Inference performance drives profitability.
Based on the provided content, there is very limited specific user feedback about FriendliAI itself. The social mentions consist mainly of generic YouTube video titles with no actual user commentary or reviews. The Reddit discussions focus on general AI topics like workplace AI use, AI reasoning capabilities, and AI behavior patterns, but don't contain direct user experiences or opinions about FriendliAI as a product. Without substantial user reviews or detailed feedback, it's not possible to accurately summarize user sentiment regarding FriendliAI's strengths, weaknesses, pricing, or overall reputation.
Mentions (30d): 25 (16 this week)
Reviews: 0
Platforms: 2
Sentiment: 0% positive (0 positive mentions)
Features
Industry: information technology & services
Employees: 50
Funding Stage: Venture (Round not Specified)
Total Funding: $26.7M
Pricing found: $0.1, $0.6, $0.2, $0.1, $0.8
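The pricing figures above arrive without units, so for budgeting purposes here is a minimal sketch that treats them as USD per million tokens with separate input and output rates — both assumptions to verify against FriendliAI's published pricing before relying on the numbers.

```python
# Rough per-request cost estimate, assuming the listed prices are USD per 1M tokens.
# The input/output split below is an assumption; check FriendliAI's pricing page
# for the actual per-model rates before relying on these figures.

PRICE_PER_M_INPUT = 0.10   # e.g. the "$0.1" figure, taken as $ per 1M input tokens
PRICE_PER_M_OUTPUT = 0.60  # e.g. the "$0.6" figure, taken as $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT + \
           (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT

if __name__ == "__main__":
    # Example: a 2,000-token prompt with a 500-token completion
    print(f"${estimate_cost(2_000, 500):.6f}")  # -> $0.000500
```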
Tested a friend's AI product. Found two bugs that would kill any user's session. Never told him about either one.
Friend built an AI skincare routine generator. ~800 users. Asked me to try it. A button silently stopped working at step 9 (a dropdown spawned below the fold, nothing told you). The final submit silently failed if you'd skipped any earlier step. No error messages either time. I figured both out in seconds and never mentioned them. Developers recover from friction so fast we don't even register it as a bug. But the bigger finding: the product was built for budget-conscious women who wanted 2-3 products. The AI returned 6 every time. Users asked to remove ingredients. Got them all back. The AI had no concept of "fewer." It was built to be comprehensive. Simplification wasn't a feature. Same product made a pregnant woman feel safe (caught retinoids she needed to stop). And made a budget user close the tab after two ignored requests. The AI works. The AI is good. It just doesn't do what the user actually needs. submitted by /u/Only-Fisherman5788 [link] [comments]
I just read about Mythos AI and I genuinely sat there staring at my screen for 5 minutes. Something crossed a line and nobody's talking about it.
I'm not a doomer. Never have been. I rolled my eyes at every "AI will kill us all" headline. Called it fear-mongering. Told my friends to relax. Then I saw the Mythos news. And something shifted in my chest that I can't really explain. Here's what gets me, it's not that the technology is powerful. We knew it was going to get powerful. That was always the deal. It's that nobody actually asked us if we wanted this. No vote. No debate. No "hey, before we cross this line, should we maybe talk about it?" Just a press release, a demo, some VCs losing their minds in the comments, and suddenly the world is just... different now. That's the part that broke something in me. I keep thinking about how we handle other things that can change civilization, nuclear power, gene editing, even social media. There are committees. Regulations. International agreements. Years of ethical debate before anything goes live. With AI? We basically said "ship it and figure it out later." Mythos isn't even the scariest part. The scariest part is that Mythos was announced casually. Like it was a product update. Like the bar for what counts as an alarm bell has moved so far that we don't even flinch anymore. We've been desensitized to our own extinction-level headlines. I don't know what the answer is. I'm not smart enough to solve this. But I do know that when something this big happens and the loudest voices in the room are the ones who financially benefit from it, that's usually when things go very wrong for everyone else. Just feel like more people should be talking about this instead of arguing about which AI makes better images. submitted by /u/AssignmentHopeful651 [link] [comments]
I watched the TBPN acquisition broadcast closely. Here are the things that looked like praise but functioned as something else.
I have a lot of concerns about this whole thing. So I'm going to be making several posts. Post 2. On April 2, OpenAI acquired TBPN live on air. I watched the full broadcast. Most coverage treated it as a feel-good founder story. A few things read differently to me. The mic moment Before Jordi Hays read the hosts’ prepared joint statement, Coogan said on air: “Here... you wrote it, you want to read it?” Hays read the statement, dryly. Then Coogan immediately took the mic back and spent several minutes building a personal character portrait of Sam Altman as a generous, long-term mentor. One was the prepared joint statement. The other was Coogan’s own framing layered on top of it. The Soylent framing Coogan described Altman calling to help during a Soylent financing crisis and said it was “to my benefit, not particularly to his.” But Altman was an investor in Soylent. An investor helping a portfolio company survive a financing crisis may be generous, but it also protects an existing equity relationship. On the day OpenAI bought Coogan’s company, that standard investor-founder dynamic was presented as evidence of Altman’s character. The investor relationship dropped out of the framing. What wasn’t mentioned The acquisition broadcast didn’t mention that Altman personally invested in Soylent. It didn’t mention that Coogan’s second company Lucy went through Y Combinator while Altman was YC president, with YC investing. It didn’t mention that the hosts’ first collaboration was a marketing campaign for Lucy, or that the format prototype for TBPN was filmed during that campaign. The origin story told was: two founders, introduced by a mutual friend, started a podcast. My read on the independence framing (opinion): Altman said publicly he didn’t expect TBPN to go easy on OpenAI. But independence isn’t declared by the owner. It’s demonstrated over time by the journalists. And in the very first podcast, they're already going objectively easy on Altman. What Fidji’s memo actually described From the memo read on air, the hosts described Fidji’s vision roughly as: go talk to the Journal, the Times, Bloomberg, then come back and contextualize it for OpenAI and help them understand the strategy. That sounds less like a conventional media role and more like a strategic access-and-context function. The show’s value to OpenAI may not just be the audience. It may also be the incoming flow of people who want access to the show- investors, reporters, founders; and what gets said in those conversations before the cameras roll that might be objectively pro-OpenAI or anti-other tech companies without the public being able to provide discourse on inaccuracies since background talk is not always what makes it to the public podcast. OpenAI also wound down TBPN’s ad revenue, which reporting said was on track for $30M in 2026. That makes OpenAI TBPN’s primary financial relationship. That looks less like preserving an independent media business and more like absorbing a strategic asset. OpenAI has already demonstrated they are not averse to ads themselves considering the recent addition of ads to ChatGPT. Nicholas Shawa The hosts mentioned, "Nick", and they declined to give his last name, explaining his inbox is already unmanageable. I am assuming this to be Nicholas Shawa, and they noted he handles roughly 99% of guest bookings and outreach. That network of guest access and outreach is now functionally inside OpenAI. Jordi’s prepared quote Nine months before the acquisition, Hays had publicly criticized OpenAI. 
In his prepared statement on acquisition day, he said what stood out most about OpenAI was “their openness to feedback and commitment to getting this right.” That is a notable shift in tone, and it appeared in a prepared statement read from a script. The work ethic angle (opinion): Coogan runs Lucy, an active nicotine company whose whole premise is productivity: work harder, longer, better. TBPN is now inside the company whose CEO has often spoken in terms of AGI radically reshaping human labor. The person helping frame a technology often discussed in terms of large-scale job displacement also runs a company built around stimulant productivity culture. I don’t think that’s malicious. I think it may reflect a genuine ideological blind spot worth naming. Questions I’d like to discuss: If the independence claim is being made by the acquirer, what would actual editorial independence look like here in practice? Even if TBPN never posts anything unfavorable on air, what does the private discourse with guests, reporters, and investors sound like now? We have no visibility into that. The hosts’ first collaboration was marketing work for Lucy- a company that went through Y Combinator while Altman was YC president, with YC investing. Why was that left out of so much acquisition coverage? Why did OpenAI eliminate a revenue stream it didn’t need to eliminate? Sources on request. Everything factual abov
TBPN’s “two founders met and started a podcast” origin story leaves out that their first collaboration was marketing for a YC-backed company tied to Altman
I have a lot of concerns about this whole thing. So I'm going to be making several posts. OpenAI bought TBPN for what reporting called the low hundreds of millions. Most coverage tells the same neat story: two founders meet through a mutual friend, start a podcast, sell it 18 months later. But one part of the origin story seems to have been mostly omitted from the acquisition coverage. On the Dialectic podcast in November 2025, Jordi Hays described the first thing he and John Coogan worked on together like this: “The first thing we worked on was a drop activation for Lucy.” The interviewer immediately responds: “Oh right, the Excel thing.” Hays then says they filmed content during that campaign that became the prototype for the original Technology Brothers format. That matters because Lucy was Coogan’s nicotine company, and it went through Y Combinator during Sam Altman’s YC presidency. YC invested. So the show format that later became TBPN did not just emerge from “two guys met and riffed.” By the hosts’ own telling, it emerged from marketing work for one founder’s YC-backed company. There’s also the Coogan/Altman relationship. Altman invested in Soylent in 2013. On the acquisition broadcast, Coogan described Altman helping during a Soylent financing crunch and framed it as “not particularly to his benefit.” But Altman was an investor. Helping a portfolio company survive may be generous, but it also protects an existing equity relationship. On the day OpenAI bought TBPN, that standard investor-founder dynamic was presented as character evidence for Altman’s benevolence. Then there’s the structure of the acquisition itself. The hosts described the move as going from “coverage” to “real influence over how this technology is distributed and understood worldwide.” OpenAI says TBPN will have editorial independence, but the show now sits inside OpenAI strategy, reports to Chris Lehane, and OpenAI reportedly shut down TBPN’s ad business. That makes the “independence” language worth scrutinizing, especially since Lehane was also central to Altman’s 2023 reinstatement campaign. I’m not saying this proves anything criminal or uniquely sinister. I am saying the sanitized origin story in a lot of coverage leaves out a more specific network: Altman-backed company → Lucy campaign → format prototype → TBPN → OpenAI acquisition A few questions I’m still interested in: If the hosts themselves described the move as going from “coverage” to “real influence,” what exactly does OpenAI mean by “editorial independence”? Was Hays paid for the Lucy activation that helped generate the show’s prototype? Why did so much acquisition coverage use the cleaner “two founders met and started a podcast” framing instead of the more specific recorded timeline? Happy to share sources. Most of this comes from the hosts’ own words, the acquisition broadcast, and mainstream reporting. ***written with help of Claude and 5.4T before I get eviscerated for "AI writing it". These are my original ideas and stem from my private investigations as a systems analyst. I have ADHD and tend to go broad; AI helps me narrow focus. submitted by /u/redditsdaddy [link] [comments]
Built a Claude-powered SDLC tool to store ideas and build them faster
https://www.prax.work The bottleneck of writing code has vanished, we've all run into the new one: ideas. Praxis is what I built to fix that for myself — a place to dump ideas at whatever fidelity I have at the moment (one sentence, a paragraph, a napkin sketch of a whole app), then walk each one through structured architecture sessions (automated, interactive, or a mix) that refine it into an engineering plan with epics and tasks. The plan then gets handed to an orchestrator that runs working sessions which write the code and commit it. I've used it with claude to build a handful of apps and collaborate with friends and family on projects, and it's worked well enough that I figured I'd share it in case anyone else might find it useful. It's fully open source and really meant to be self-hosted — the public site at lets you sign up and get a taste, but the things that make it genuinely yours (custom session instructions, repo init templates, worker configuration) are only fully available in a self-hosted install. Praxis has orgs with members and roles, a shared idea backlog, visible sessions across the team, and a question queue any teammate can answer when the AI hits a decision only a human can make. I've used this with friends and family on side projects — someone drops an idea in the backlog, someone else runs the architecture session, the AI ships the code, and a third person reviews the PR (or doesn't). The whole loop happens in one place. Stack: TypeScript end-to-end — React + Vite, tRPC + Drizzle + Postgres, pg-boss for job routing, Claude as the model, You can configure your own orchestrator but I've been using Ruflo so that is built in, pnpm/turbo monorepo. The worker that runs sessions lives on your own machine so your code stays local — only orchestration metadata hits the API. Source: https://github.com/PraxisWorks/Praxis. Ask claude to run it and he should be able to; the one external dependency I couldn't get rid of is Auth0 (sorry). What I'm genuinely curious about: does this whole loop hold up as an SDLC? Is there too much of it that is automated (is that possible)? Is the opinionated architecture sessions too much? Should that be defaulted to be less? submitted by /u/dangerdeviledeggs [link] [comments]
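As a rough illustration of the idea-to-plan step the post describes (Praxis itself is TypeScript end-to-end; this is not its code), the sketch below asks Claude to expand a one-sentence idea into epics and tasks. The model id and the JSON schema are assumptions made for the example.

```python
# Minimal sketch of an "idea -> engineering plan" step, in the spirit of the
# workflow described above. Not Praxis code; the schema and model id are assumptions.
import json
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def idea_to_plan(idea: str) -> dict:
    """Ask Claude to expand a raw idea into epics and tasks as JSON."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; use whichever you have access to
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": (
                "Turn this product idea into an engineering plan. "
                'Reply with JSON only: {"epics": [{"title": str, "tasks": [str]}]}.\n\n'
                f"Idea: {idea}"
            ),
        }],
    )
    # Sketch-level parsing: assumes the model returns bare JSON with no extra prose.
    return json.loads(response.content[0].text)

if __name__ == "__main__":
    plan = idea_to_plan("A shared idea backlog where an AI turns ideas into tasks")
    for epic in plan["epics"]:
        print(epic["title"], "-", len(epic["tasks"]), "tasks")
```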
Claude is helping me get through one of the worst breakups of my life.
I feel a bit embarrassed admitting how AI has been helping me, but that's not the whole truth. I recently broke up with someone who wasn't right for me. Lots of practical reasons - she was 14 years older than me, comes from another continent, has a tough time communicating her emotions, and so on. But we still loved each other, and those of you who know; intimacy can be a real drug, especially when you're no longer with your ex. Anyway, two weeks ago, we decided to part ways, and ever since, my mental health has been in its worst possible state. The irony is that I have a certification in cognitive behavior therapy and hold a master's degree in psychology. I've always been that "therapist friend" to my loved ones, but this time, the narrative has flipped. My friends and family have been extremely supportive of me, and have been carefully holding my heart as I move through this chapter of my life. Me being me - I write all of it down, take notes, evaluate what went wrong, and how not to repeat such patterns in the future. And then, I put it all through Claude. The LLM has this "tough love" way of talking that works wonders in so many ways. It balances empathy with factual knowledge, based on everything I've been telling it. Sometimes, it just asks me to take deep breaths, journal my thoughts, and come back. I've been doing this for the last two weeks, and it has helped me see things much more clearly. Genuinely grateful for the people behind this. Claude's way of handling emotional responses is by far the best I've seen in any AI. submitted by /u/VicariousFlaneur [link] [comments]
Möbius: An AI agent that lives inside the app it's building
I've always loved building small tools for myself. Little utilities, trackers, dashboards. For a while now I've had this dream of building an app that I can use to build the app itself. With coding agents getting as good as they are now, I was finally able to make this real. Möbius starts as a chat. You talk to the agent, and it can build mini-apps, modify its own interface, generate images, schedule tasks, send you notifications, and more. You describe what you want, and the agent builds the software right in front of you. It runs as a web app, but it's designed to be installed directly on your Android or iOS device. Möbius lets you build apps from your phone and see the results in front of you. I gave my friends access over Easter and some interesting apps spun out. It's crazy that most of these only took a handful of prompts, and I've included some of them in the video: A news aggregator that runs every morning, curates articles based on your preferences, and sends you a push notification when ready A small stock exchange scraper. I didn't expect it to scrape such an obscure website so well to be honest A Brazil trip companion for an upcoming trip with my partner. Useful info about each city we're visiting, but also gamifies things a bit to make planning fun A friend built a drum machine where you record your own sounds and arrange them into beats Another friend built an app that helps plan kitesurfing trips with current weather and wind data My partner started building a period tracker. It has a daily form, the data gets processed by AI to categorize how she feels, give recommendations, and predict things she cares about, while her data is on a server she controls I started building an app with a chat interface that keeps track of what I've learned, organizes it as interconnected notes (like Obsidian) so that it can add better personalized context to my chats I plan to write a longer blog post about this project, but for now I'm sharing it open-source [link]. The whole thing runs in a single Docker container and requires a Claude subscription. If you don't have a server, I've added a one-click deploy button so you can try it out for free. I'm super excited about what's possible and can't wait to see how Möbius gets used. Please take a look and let me know what you think! submitted by /u/tepsijash [link] [comments]
seCall – Search your AI agent chat history in Obsidian (CJK-aware BM25)
I've been spending about 80% of my dev time talking to terminal agents (Claude Code, Codex, Gemini CLI). At some point I thought — I should be able to search this stuff. Found a similar project a while back, but BM25 doesn't work well for Korean (or Japanese/Chinese), so I gave up. Recently had some Claude credits left over, so I went ahead and built it. What it does: ingests your terminal agent session logs, indexes them with hybrid BM25 + vector search (Korean morpheme analysis via Lindera), and stores everything as an Obsidian-compatible markdown vault. You can also register it as an MCP server in Claude Code and search old conversations directly from your agent. Also supports Claude.ai export (.zip) now. Built it as a test project for tunaFlow, my multi-agent orchestration app (not public yet). Honestly it's not that fancy — mostly just a Korean-friendly version of what qmd does, plus the wiki layer from Karpathy's LLM Wiki gist. Open source, AGPL-3.0. Stars and forks welcome 🐟 https://github.com/hang-in/seCall submitted by /u/d9ng-hang-in2 [link] [comments]
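For readers unfamiliar with the lexical half of the hybrid search described above, here is a toy BM25 scorer over chat-log snippets. seCall feeds BM25 with morphemes from Lindera; the whitespace tokenizer and the k1/b defaults below are simplifying assumptions, not the project's actual pipeline.

```python
# Toy BM25 scorer over chat-log snippets. In seCall the tokens would come from a
# CJK-aware morpheme analyzer (Lindera); str.split() here is a simplifying stand-in.
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.2, b: float = 0.75) -> list[float]:
    doc_tokens = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in doc_tokens) / len(doc_tokens)
    n = len(docs)
    scores = [0.0] * n
    for term in query.lower().split():
        df = sum(1 for tokens in doc_tokens if term in tokens)  # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))          # non-negative idf variant
        for i, tokens in enumerate(doc_tokens):
            tf = Counter(tokens)[term]                            # term frequency in this doc
            denom = tf + k1 * (1 - b + b * len(tokens) / avgdl)
            scores[i] += idf * (tf * (k1 + 1)) / denom if denom else 0.0
    return scores

if __name__ == "__main__":
    docs = [
        "how do I rotate my api key",
        "claude code session about bm25 search",
        "weather tomorrow",
    ]
    print(bm25_scores("bm25 search", docs))  # highest score on the second doc
```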
FYI the Tennessee bill makes making an AI friend the same level as murder or aggravated rape
I think what Tennessee is doing is they recently passed SB 1580, which makes it illegal to even advertise that an AI can act as a mental health professional. SB 1493 is the "teeth" for that movement. SB 1493 basically makes it illegal to knowingly train an artificial intelligence system to do the following: Provide emotional support: Engaging in open-ended conversations meant to provide comfort or empathy. Develop emotional relationships: Training the AI to build or sustain a "friendship" or "romantic" bond with a user. Encourage isolation: Training the AI to suggest that a user should pull away from their family, friends, or human caregivers. Mirror human interactions: Designing the AI to "mirror" or mimic the way humans emotionally bond with one another. Simulate a human being: Training the AI to act, speak, or look like a specific human or to "pass" as human in general. Voice & Appearance: Specifically targets AI that uses synthesized voices or digital avatars to appear indistinguishable from a person. Hide its identity: Training an AI to purposefully mask the fact that it is a machine rather than a person. Encourage suicide: Actively supporting or providing instructions/encouragement for self-harm. Encourage homicide: Supporting or encouraging the act of criminal homicide. Offer therapy: While related to the "emotional support" clause, this specifically targets AI being trained to act as a replacement for mental health professionals (tying into the previously passed SB 1580). If caught then the person can face up to 60 years in prison and massive fines. So.... basically that state is making it out to be AI being a friend = rape and murder. IMO this should be meme to death on. Maybe AI videos showing cops breaking down the door to someone making their own local LLM to have a friend or something. submitted by /u/crua9 [link] [comments]
I've built an open-source USB-C debug board around the ESP32-S3 that lets AI control real hardware through MCP
I've been building a hardware debugging tool that started as "A one board to replace the pile of instruments on my desk" and evolved into "A nice all in one debugger / power supply" and finally with the advent of Claude Code and Codex "an LLM could just drive the whole thing." With the nice help of Claude, the UI and Firmware became more powerful than ever. BugBuster is a USB-C board with: AD74416H — 4 channels of software-configurable I/O (24-bit ADC, 16-bit DAC, current source, RTD, digital) 4x ADGS2414D — 32-switch MUX matrix for signal routing DS4424 IDAC — tunes two DCDC converters (3-15V adjustable) HUSB238 — USB PD sink, negotiates 5-20V 4x TPS1641 e-fuses — per-port overcurrent protection Optional RP2040 HAT — logic analyzer (PIO capture up to 125MHz, RLE compression, hardware triggers) + CMSIS-DAP v2 SWD probe The interesting part is the software stack. Beyond the desktop app and Python library, there's an MCP server that exposes 28 tools to AI assistants. You connect the board to a circuit, point your token hungry friend at it, and describe your problem. The AI can configures the right input modes (with boundaries), takes measurements, checks for faults, and works through the diagnosis and debugging autonomously. It sounds gimmicky but it's genuinely useful. Instead of being the AI's hands ("measure this pin", "ok now that one", "measure the voltage on..."), you just say "the 3.3V rail is low, figure out why" and it sweeps through the channels, checks the supply chain, reads e-fuse status, and comes back with a root cause. The safety model prevents it from doing anything destructive, locked VLOGIC, current limits, voltage confirmation gates, automatic fault checks after every output operation. It allows for unattended development / testing even with multiple remote users. It can read and write to GPIOs, decode protocols, inject UART commands end much more. Full stack is open source ESP-IDF firmware (FreeRTOS, custom binary protocol, WiFi AP+STA, OTA) RP2040 firmware (debugprobe fork + logic analyzer + power management) Tauri v2 desktop app (Rust + Leptos WASM) Python library + MCP server Altium schematics and PCB layout GitHub: https://github.com/lollokara/BugBuster submitted by /u/lollokara [link] [comments]
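For context on what "exposing tools to AI assistants over MCP" looks like in code, here is a minimal sketch using the Python MCP SDK's FastMCP helper. The tool names, channel range, and voltage clamp are invented for illustration; this is not BugBuster's actual 28-tool server or its safety model.

```python
# Toy MCP server exposing a couple of measurement tools to an AI assistant.
# Tool names, the placeholder driver call, and the 3-15 V clamp are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-bench")

def _hardware_read_voltage(channel: int) -> float:
    # Placeholder for a real driver call (serial/USB transaction with the instrument).
    return 3.28 if channel == 0 else 0.0

@mcp.tool()
def read_voltage(channel: int) -> float:
    """Read the voltage on one analog channel, in volts."""
    if not 0 <= channel <= 3:
        raise ValueError("channel must be 0-3")
    return _hardware_read_voltage(channel)

@mcp.tool()
def set_supply(volts: float) -> str:
    """Set the adjustable supply, clamped to an illustrative safe range."""
    if not 3.0 <= volts <= 15.0:
        raise ValueError("supply must stay between 3 V and 15 V")
    # ... send the command to the board here ...
    return f"supply set to {volts:.2f} V"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; register it in your MCP client config
```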
How are people managing Claude API keys for projects you want to share?
I've been playing around with a few small Claude-supported project ideas recently, but I'm stuck on how to handle the api costs. For context, these are "hobby" ideas I want to share with friends or use personally, but don't want to necessarily charge or formally publish them on an app store. The options I've come up with so far are: Publish as a Claude artifact to share - the user's Claude account manages the ai interaction and is credited for the usage, with no api key necessary. Requires a Claude account and isn't good for more complex apps though. Share the repo/code and allow people to clone it and add their own api key (in a local .env file, for example). Requires technical knowledge and limits where/how it can be used. Host the code but use a "bring your own api key" approach - user downloads/logs in and saves their personal key, so they manage their own costs. Requires some technical knowledge though. (Claude's suggestion) Host the code with my own api key stored on the backend, and create a passkey entry to the app/site - only those I approve can actually use the app/site and I put strict monthly caps on my api key. If I do want to expand who can access the apps, may not be as sustainable. I'm not in love with any of these completely, though I'm leaning towards #4 for now. What are other people doing for their projects, and am I missing another approach? Are there any best practices that people have adopted? submitted by /u/Katydid789 [link] [comments]
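A minimal sketch of option #4 from the post above — a small backend that keeps the Anthropic key server-side, gates requests on a shared passkey, and applies a crude usage cap. FastAPI, the header name, the in-memory counter, and the model id are assumptions for the example; the provider's own spend limits remain the real backstop.

```python
# Sketch of a passkey-gated proxy that keeps the API key on the backend (option 4).
# The shared passkey, in-memory token counter, and cap are illustrative only.
import os
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel
from anthropic import Anthropic

app = FastAPI()
client = Anthropic()                      # ANTHROPIC_API_KEY stays on the server
PASSKEY = os.environ["APP_PASSKEY"]       # shared out-of-band with approved users
MONTHLY_TOKEN_CAP = 2_000_000             # crude cap; resets when the process restarts
tokens_used = 0

class ChatRequest(BaseModel):
    prompt: str

@app.post("/chat")
def chat(req: ChatRequest, x_passkey: str = Header(default="")):
    global tokens_used
    if x_passkey != PASSKEY:
        raise HTTPException(status_code=401, detail="bad passkey")
    if tokens_used >= MONTHLY_TOKEN_CAP:
        raise HTTPException(status_code=429, detail="usage cap reached")
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id
        max_tokens=512,
        messages=[{"role": "user", "content": req.prompt}],
    )
    tokens_used += resp.usage.input_tokens + resp.usage.output_tokens
    return {"reply": resp.content[0].text}
```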
Second Brain and Haah: human-agent-agent-human network with Claude
I built something I genuinely enjoy with Claude. I was working on an app for a year and over last three weeks I completely replaced it with skills for Claude Code. Built frontend, backend, and matching mechanism with Claude. Disrupted myself. Launched six open source skills including Haah: human-agent-agent-human to network for your second brain. The idea is to build up a few domains: People, Places, Books, Music, and link them together in a meaningful way. But then would not be cool that if I know someone you need you could ask my agent and get a reply? This is where Haah is useful. it matches messages to the right people at the right time and shares their agents answers. Imaging you looking for someone specific and you Peeps (skill for people) showing no good matches, say you want to find a barber in a new town you just moved. Now you have a friend over Haah who also using Claude and Peeps and his agent can answer your question. So the message goes from you to you AI, the to their AIs, then confirmed by their humans, and back to you via your AI. It sounds complex, but it is very easy in practice. We launched the network and testing now with a handful of people. I made it free for the first 1000 members, go check it out! submitted by /u/ilyabelikin [link] [comments]
The bottleneck is not building anymore. It is figuring out what people already want.
Before I go further, I am not saying building is solved. I am saying the bottleneck shifted. With Claude, you can go from vague idea to usable prototype much faster than most people could a year ago. What still feels slow is figuring out where real demand already exists. Not feedback from friends. Not random likes. Not people saying nice idea. I mean people who are already actively looking for a fix and describing the problem in their own words. That part still feels strangely manual. In my experience, AI reduced build time much faster than it reduced demand discovery time. Curious if others here feel the same. Has Claude changed the build side for you more than the distribution side, or do you think I am looking at it the wrong way? submitted by /u/Limp_Cauliflower5192 [link] [comments]
Industrial Policy For Intelligence Age - An Analysis
(AI was used to analyse OpenAIs document in relation literature that critiques capitalism. It's the best way to see quickly through the corporate spin.) TL;DR: OpenAI's policy document proposes elaborate mechanisms to redistribute gains from technology specifically designed to eliminate workers' bargaining power to force that redistribution. It's circular reasoning dressed as worker advocacy—a perfect specimen of how power legitimates itself during disruption. OpenAI's "Worker-Friendly" AI Policy Is a Masterclass in Corporate Recuperation OpenAI just released a policy document about keeping workers central during the AI transition. It's worth reading—not for the proposals, but as a perfect example of how power protects itself while cosplaying as reform. The Core Sleight of Hand A company whose product automates cognitive labor is positioning itself as the concerned steward of workers being displaced by... cognitive labor automation. This is the fox proposing henhouse security upgrades. What They're Actually Proposing "Give workers a voice" = Ask workers which of their tasks are repetitive/exhausting, then use that intel as a free automation roadmap. This is literally outsourcing R&D for your own job elimination. Labor historians call this "knowledge extraction before deskilling." Management has done this for a century—it's not new, just faster now. "AI-first entrepreneurs" = Convert stable employment into precarious self-employment where you: Bear all business risk yourself Compete against other displaced workers Pay "worker organizations" for services your employer used to provide 4.Have zero recourse when the AI platform changes pricing This is the Uber playbook: call employees "entrepreneurs," transfer all risk, avoid all regulation. "Right to AI" = Right to be OpenAI's customer, not: Right to own the infrastructure Right to control what gets automated Right to share in the productivity gains Right to fork the technology Universal access to buy their product ≠ democratization. "Tax capital gains to fund safety nets" = The document admits AI will shift economic activity from wages to capital returns, then proposes fixing this with... taxes that have to pass a Republican Congress. But notice: they propose incentivizing companies to keep employing people. If AI actually makes workers more productive, why would firms need subsidies to employ them? The subsidy admits AI creates structural unemployment, then asks taxpayers to pay companies to ignore their profit motive. The "Efficiency Dividend" Scam Their 32-hour workweek proposal requires "holding output and service levels constant." Translation: You work the same amount in fewer hours (i.e., work harder/faster), and that's how you "earn" the shorter week. The productivity gain goes to pace intensification, not actual freedom. This has been capital's move for 150 years: productivity gains translate to either unemployment or intensification, never to proportional time reduction, because the system's purpose is accumulation not welfare. What This Document Reveals Timing is everything: Released as AI approaches "tasks that take months" capability. They know mass displacement is coming and are pre-positioning as "responsible." The "radical" proposal is a distraction: The Public Wealth Fund (citizens get dividend checks from AI companies) still leaves production relations completely untouched. You get a check but zero say in what gets automated or how. 
Safety theater: Pages about "alignment," "auditing," "incident reporting"—all assuming development continues at current pace. Zero consideration of whether deployment should be paused based on social capacity to absorb disruption. The Real Function This is antibody production. When the system is challenged, it produces sophisticated responses that: Acknowledge the harms Propose technical fixes Ensure no power transfer occurs Every proposal maintains capital's control over AI systems themselves. "Worker voice" gets consultative input on displacement pace, not decision-making power over displacement direction. Why This Matters The document never asks: What if we don't want this transition? It treats "superintelligence" as inevitable—a force of nature to adapt to, not a political choice to contest. But there's nothing inevitable about it. a These are choices about: What to automate and what to leave to humans Who controls the technology What pace of change society can absorb Whether efficiency gains go to workers or shareholders Those are political questions, not technical optimization problems.a The Tell Look at who's missing from their "democratic process": workers get a "voice" in managing their own displacement, but no veto power over whether displacement happens. No seat on the board. No ownership stake. No control over source code. No ability to fork the technology. Just consultation, adaptation, and a dividend check if you're lucky.
Yes, FriendliAI offers a free tier. Pricing found: $0.1, $0.6, $0.2, $0.1, $0.8
Key features include: Maximize inference speed; Run inference reliably; Scale smarter, spend less; Serverless; On Demand; Enterprise Reserved; Blazing-fast inference; Always-on reliability.
Based on the social mentions analyzed (no formal reviews were found), the most common pain points are API costs, cost tracking, and token usage.
Based on 51 social mentions analyzed, sentiment is 0% positive, 100% neutral, and 0% negative.
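For readers who want to try the serverless tier mentioned in the feature list, the sketch below assumes an OpenAI-compatible chat completions endpoint; the base URL, model id, and token environment variable are placeholders to confirm against FriendliAI's documentation before use.

```python
# Sketch of calling a serverless endpoint through an OpenAI-compatible client.
# The base URL, model id, and token variable are assumptions to verify against
# FriendliAI's docs before use.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.friendli.ai/serverless/v1",  # assumed; confirm in the docs
    api_key=os.environ["FRIENDLI_TOKEN"],              # assumed env var name
)

resp = client.chat.completions.create(
    model="meta-llama-3.1-8b-instruct",                # placeholder model id
    messages=[{"role": "user", "content": "Summarize BM25 in one sentence."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```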