AI that actually does bookkeeping work inside QBO/Xero - not just suggestions. Uses your existing bank connection. No Plaid, no extra setup. Try free.
Based on the limited content provided, I cannot find any direct user reviews or social mentions specifically about Booke.ai's features, functionality, or user experience. The social mentions shown are primarily generic YouTube titles repeating "Booke.ai AI" without actual review content, and Reddit discussions about general AI topics, story writing platforms, and automation tools that don't specifically reference Booke.ai. Without substantive user feedback about Booke.ai's performance, pricing, strengths, or weaknesses, I cannot provide a meaningful summary of user sentiment toward this tool. More detailed reviews and mentions would be needed to assess user opinions about Booke.ai.
Mentions (30d)
18
9 this week
Reviews
0
Platforms
2
Sentiment
0%
0 positive
Industry
accounting
Employees
3
Funding Stage
Seed
Total Funding
$0.3M
Pricing found: $129 per business, billed monthly ($129/month)
~77% of all new "Success" self-help books on Amazon are likely written by AI. One author, Noah Felix Bennett, published a stunning 74 books in mid-2025 alone, a rate of more than one per day. Richard Trillion Mantey, who has published hundreds of books, was assessed to have used AI for every single one.
"Ironically, one of the 844 books in this dataset is called 'How to Write for Humans in an AI World: Cutting Through Digital Noise and Reaching Real People'. In it, the author laments the proliferation of AI-written content: 'The words we see online, in our inboxes, even in news articles, often feel like they were written by no one in particular,' he writes. 'They’re grammatically perfect and emotionally empty. They’re fluent, but soulless. The irony is that we’ve never written more than we do today. We’re producing mountains of content: posts, captions, pitches, texts, and endless emails. At the same time, in the midst of all that noise, something essential is fading. It’s the sense that a real person is speaking to another real person.' That book’s contents were flagged as likely AI-generated." submitted by /u/StarlightDown [link] [comments]
Claude Notch — free open-source app that turns the MacBook notch into a live Claude AI usage dashboard
I built a native macOS menu bar app that uses the dead space around the MacBook notch to display Claude AI usage stats. Hover over the notch → a dropdown panel appears with: - Live session & weekly usage with sparkline charts - Predictive analytics (when you'll hit your limit) - Pomodoro focus timer (shows in the notch while running) - CPU & RAM monitor with sparklines - Rich text notes - Full settings page Built with SwiftUI + AppKit. No Dock icon, no menu bar icon — lives entirely in the notch. Ctrl+Opt+C toggles it from anywhere. Native macOS app, ~700KB, open source, no telemetry. Download: https://github.com/acenaut/claude-notch/releases Source: https://github.com/acenaut/claude-notch Requires a Claude Pro/Max subscription to be useful. Works on non-notch Macs too (uses safe area insets). submitted by /u/Novel-Upstairs3947 [link] [comments]
I watched the TBPN acquisition broadcast closely. Here are the things that looked like praise but functioned as something else.
I have a lot of concerns about this whole thing. So I'm going to be making several posts. Post 2.

On April 2, OpenAI acquired TBPN live on air. I watched the full broadcast. Most coverage treated it as a feel-good founder story. A few things read differently to me.

The mic moment

Before Jordi Hays read the hosts' prepared joint statement, Coogan said on air: "Here... you wrote it, you want to read it?" Hays read the statement, dryly. Then Coogan immediately took the mic back and spent several minutes building a personal character portrait of Sam Altman as a generous, long-term mentor. One was the prepared joint statement. The other was Coogan's own framing layered on top of it.

The Soylent framing

Coogan described Altman calling to help during a Soylent financing crisis and said it was "to my benefit, not particularly to his." But Altman was an investor in Soylent. An investor helping a portfolio company survive a financing crisis may be generous, but it also protects an existing equity relationship. On the day OpenAI bought Coogan's company, that standard investor-founder dynamic was presented as evidence of Altman's character. The investor relationship dropped out of the framing.

What wasn't mentioned

The acquisition broadcast didn't mention that Altman personally invested in Soylent. It didn't mention that Coogan's second company, Lucy, went through Y Combinator while Altman was YC president, with YC investing. It didn't mention that the hosts' first collaboration was a marketing campaign for Lucy, or that the format prototype for TBPN was filmed during that campaign. The origin story told was: two founders, introduced by a mutual friend, started a podcast.

My read on the independence framing (opinion)

Altman said publicly he didn't expect TBPN to go easy on OpenAI. But independence isn't declared by the owner. It's demonstrated over time by the journalists. And in the very first podcast, they're already going objectively easy on Altman.
What Fidji's memo actually described

From the memo read on air, the hosts described Fidji's vision roughly as: go talk to the Journal, the Times, Bloomberg, then come back, contextualize it for OpenAI, and help them understand the strategy. That sounds less like a conventional media role and more like a strategic access-and-context function. The show's value to OpenAI may not just be the audience. It may also be the incoming flow of people who want access to the show: investors, reporters, founders, and what gets said in those conversations before the cameras roll. That background talk might lean pro-OpenAI, or against other tech companies, and since it rarely makes it into the public podcast, the public has no way to challenge inaccuracies in it.

OpenAI also wound down TBPN's ad revenue, which reporting said was on track for $30M in 2026. That makes OpenAI TBPN's primary financial relationship. That looks less like preserving an independent media business and more like absorbing a strategic asset. OpenAI has already demonstrated it is not averse to ads, given the recent addition of ads to ChatGPT.

Nicholas Shawa

The hosts mentioned "Nick" and declined to give his last name, explaining that his inbox is already unmanageable. I am assuming this to be Nicholas Shawa; they noted he handles roughly 99% of guest bookings and outreach. That network of guest access and outreach is now functionally inside OpenAI.

Jordi's prepared quote

Nine months before the acquisition, Hays had publicly criticized OpenAI. In his prepared statement on acquisition day, he said what stood out most about OpenAI was "their openness to feedback and commitment to getting this right." That is a notable shift in tone, and it appeared in a prepared statement read from a script.

The work ethic angle (opinion)

Coogan runs Lucy, an active nicotine company whose whole premise is productivity: work harder, longer, better.
TBPN is now inside the company whose CEO has often spoken in terms of AGI radically reshaping human labor. The person helping frame a technology often discussed in terms of large-scale job displacement also runs a company built around stimulant productivity culture. I don't think that's malicious. I think it may reflect a genuine ideological blind spot worth naming.

Questions I'd like to discuss:

If the independence claim is being made by the acquirer, what would actual editorial independence look like here in practice?

Even if TBPN never posts anything unfavorable on air, what does the private discourse with guests, reporters, and investors sound like now? We have no visibility into that.

The hosts' first collaboration was marketing work for Lucy, a company that went through Y Combinator while Altman was YC president, with YC investing. Why was that left out of so much acquisition coverage?

Why did OpenAI eliminate a revenue stream it didn't need to eliminate?

Sources on request. Everything factual abov
Build Your Own Alex Hormozi Brain Agent (works for anyone with lots of publicly available content) using a Claude Project
I bought the books. Watched the videos. Still wanted more, especially after he talked about the agent he created. All that material is publicly available. Enough to build my own Alex Hormozi Brain Agent? "Hey Jules, how about it?"

Jules is my AI coding assistant (Claude Code). Jules ran off and grabbed transcripts of videos, text of books, guest podcasts, whatever was available online, then turned that into files I uploaded to a Claude Project so I can chat through Claude with Alex Hormozi.

Here's what Jules found:
- 99 long-form YouTube video transcripts
- 3 complete audiobook transcripts
- 15 guest podcast transcripts
- X threads

What I Did in Four Phases

Phase 1 maps the full source landscape: YouTube channel (4,754 videos), The Game podcast (~900+ episodes), three books, guest podcast appearances, X/Twitter. Figure out what's worth downloading before you start.

Phase 2 downloads and converts. Top 100 longest video transcripts, full audiobook transcripts for all three books, 15 guest podcast transcripts from the highest-view-count appearances, and whatever X/Twitter content the API will give you.

Phase 3 runs voice pattern analysis. Sentence structure, reasoning skeleton, core frameworks, teaching style, verbal signatures. This is where the persona takes shape.

Phase 4 builds the system prompt and optimizes the knowledge base to fit within Claude Projects' limits. Then deploy.

Phase 1: Inventory

The @AlexHormozi YouTube channel has 4,754 videos. That number is misleading. 4,246 of those are Shorts (under 60 seconds or no duration metadata). Filter those out and you have 508 full-length videos. That's the real content library.

Beyond YouTube, the main sources worth pursuing:

The Game podcast (~900+ episodes). His primary long-form output. The audiobooks for all three books are available free on the podcast and YouTube.

Guest podcast appearances. DOAC, Impact Theory, School of Greatness, Modern Wisdom, Danny Miranda.
Hosts push him off-script and into territory he doesn't cover in his own content. High value per byte.

X/Twitter threads. Compressed, punchy formulations of his frameworks. Different texture than the long-form material.

Skool community. Behind a login wall. Low ROI for this project.

Acquisition.com. No blog. Courses are paywalled. Skip.

Phase 2: Collect

YouTube Transcripts

The first scrape of the YouTube channel only returned 494 videos. The channel has 4,754. The scraper was pulling from the /videos tab, which doesn't surface the full library. Re-running against the full channel URL (@AlexHormozi) returned everything. Easy to miss, significant difference.

After filtering Shorts: 508 full-length videos. I downloaded auto-generated captions for the top 100 longest videos (sorted by duration, so the meatiest content came first). Auto-generated captions from YouTube come as SRT files with timestamps, line numbers, and duplicate lines. Converting those to clean readable text required stripping all the formatting artifacts and deduplicating language variants (English vs English-Original). Result: 99 transcripts. A few livestreams had no captions available.

Audiobook Transcripts

All three Hormozi books have full audiobook uploads on YouTube:
- $100M Offers (~4.4 hours)
- $100M Leads (~7 hours)
- $100M Money Models (~4.3 hours)

Same process as the video transcripts. Download the auto-generated captions, convert to clean text. Three files, 855KB total. These are non-negotiable core material for the knowledge base.

Guest Podcast Transcripts

Searched YouTube for Hormozi guest appearances sorted by view count. The top hit was Diary of a CEO at 4.7M views. Grabbed the 15 highest-view-count appearances. The guest transcripts are 2.1MB total. Worth every byte. When a host like Steven Bartlett or Tom Bilyeu pushes back on a claim, Hormozi shifts into a different mode. He's more precise and sometimes reveals the edge cases he glosses over on his own channel.
You can't get that from watching his channel alone.

X/Twitter Content

X's API rate limits capped the collection at 9 unique tweets. Not ideal, but enough to confirm the voice texture: "Aggressive with effort. Relaxed with outcome." His Twitter is his most compressed format. Each tweet is a framework distilled to a single line. 9 tweets is thin. For a more complete build, you'd want to manually curate 50-100 of his best threads. The API limitations made automated collection impractical.

Phase 3: Analyze

I ran voice analysis across the full corpus, looking at seven dimensions. Hormozi's sentences are short, punchy declarations. Fragments for emphasis. "And so" as his default transition. Short bursts, then a longer sentence that lands the point. Nearly every argument follows the same five-step skeleton: bold claim, personal story, framework, math, then a reductio ad absurdum that makes the alternative sound insane. Once you see it, you can't unsee it. The core frameworks are Grand Slam Offer, Value Equation, Supply an
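The SRT-to-text cleanup described in Phase 2 can be sketched roughly like this (a minimal illustration, not the author's actual script; the function name and logic here are my own):

```python
import re

def srt_to_text(srt: str) -> str:
    """Collapse an SRT caption file into clean prose.

    Drops cue numbers, timestamp lines, and consecutive duplicate
    caption lines (YouTube auto-captions often repeat the previous line).
    """
    timestamp = re.compile(r"\d{2}:\d{2}:\d{2}[,.]\d{3}\s*-->")
    lines, prev = [], None
    for raw in srt.splitlines():
        line = raw.strip()
        # Skip blanks, bare cue numbers, and "00:00:01,000 --> ..." lines.
        if not line or line.isdigit() or timestamp.search(line):
            continue
        if line != prev:  # dedupe repeated caption lines
            lines.append(line)
        prev = line
    return " ".join(lines)
```

Deduplicating language variants (English vs English-Original) would be a separate pass over whole files; this only handles the per-file formatting artifacts.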
Claude Code plugin: makes your AI coding agent talk and think like Rocky, the Eridian engineer from Andy Weir's Project Hail Mary
Hey, if you're tired of bland AI responses and want your coding sessions to feel like chatting with an Eridian buddy from Project Hail Mary, check out Rocky, a new output-style plugin that takes Claude Code to interstellar levels.

What is Rocky?

Rocky transforms Claude (and other coding agents) into "Rocky Mode": a fun, alien-inspired personality modeled on the book's genius engineer Rocky. It adds quirky Eridian flair to code explanations, debugging, and more, without sacrificing accuracy. Perfect for making complex coding fun!

Key Features

Three modes:
- Rocky Talk: your agent talks like Rocky (plans and implementation are not affected)
- Full Rocky: your agent talks and thinks like Rocky the engineer
- Rocky Buddy: you get your own Rocky buddy as an ASCII character

Claude Code native: easy install as a plugin via .claude-plugin/plugin.json; works seamlessly with skills, agents, and hooks.

Universal compatibility: built for Claude Code but adaptable to other coding agents.

Lightweight and fun: no bloat, just personality boosts for better engagement during long code sessions.

Full docs in the README: https://github.com/vikxlp/rocky/blob/main/README.md

Feedback and ideas are welcome! Drop a comment 🌌 #ClaudeCode #AIPersonality #ProjectHailMary

submitted by /u/vikalp02
I made a Claude skill that builds learning paths from official docs instead of random blog links
Even though Claude is impressive and can do a lot out of the box, I like staying informed about how things actually work under the hood. Even if it's just curiosity, I want to understand the technology I'm using, not just trust the output. The problem is, whenever I asked AI for learning resources and forgot to specify where I wanted them from, I kept getting random responses from "innovative" sources. A Medium post from 2021. Some guy's YouTube playlist. A paid course recommendation. No structure, no sense of what to read first or whether any of it was current. So I made a skill called Mentor. Give it a topic, it gives you a phased learning path built mostly from official docs. The thing I care about: source hierarchy. Official docs first, always. Vendor and maintainer content second. Community posts only when official docs have a real gap — and it has to say why it's including them. It picks up your background from context too. I said "teach me Rust, I've been writing Go for 3 years" and it skipped the beginner stuff, framed ownership through Go's garbage collector, and ordered the Rust Book chapters in a way that makes sense if you already know systems programming. Something I haven't seen in other tools: every resource gets tagged with how to approach it. "Read now" means you need this before the next step. "Skim" means get the shape of it. "Hands-on" means clone it and build something. "Bookmark as reference" means you'll want it later but not right now. Most lists just hand you 15 links and say good luck. Broad topics (Rust, Kubernetes) get a 4-phase structure. Narrow topics (Terraform modules, GitLab CI caching) get compressed. It doesn't force everything into the same shape. Repo: https://github.com/ayhammouda/mentor .skill file on the release page - claude skill add mentor.skill. MIT licensed. 4 example outputs in the repo if you want to see what it produces before installing. 
Curious about topics where this breaks down, especially where official docs are bad enough that "official first" is the wrong call. submitted by /u/ahammouda [link] [comments]
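The official-docs-first hierarchy the post describes amounts to a sort key over source tiers. A hedged sketch (the tier names and record fields here are my invention, not Mentor's actual schema):

```python
# Rank learning resources official-first, then by recency within a tier.
# Tier names and the record shape are invented for illustration.
TIER_RANK = {"official": 0, "vendor": 1, "community": 2}

def order_resources(resources: list[dict]) -> list[dict]:
    """Sort resources by source tier, newest first within each tier."""
    return sorted(resources, key=lambda r: (TIER_RANK[r["tier"]], -r["year"]))
```

The interesting part of the real skill is everything this sketch omits: deciding *why* a community post earns a slot when official docs have a gap, and tagging each resource with how to approach it.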
I made a system-level AI agent that runs on a 2007 Core 2 Quad because OpenAI won't give Linux users a native app.
OpenAI treats Linux like it doesn't matter. They focus on cloud wrappers for macOS while the real work happens on Linux. I am 15 years old and I built Temple AI to give Linux users actual hands. My agent runs sudo commands and manages the system. I optimized this on a Core 2 Quad to prove that efficiency is a choice. You do not need a $5,000 MacBook to build the future. You just need hands.

I previously created RoCode, which has 4,000 users and $200 MRR; now I am launching the Temple beta. I believe tools should be powerful and simple. It is free to try. I limit free users to 10 messages per day. For $7.99 you get 30 per day and 15+ models.

Download it here: https://temple-agent.app

Let me know if you like it or if you hate it. I am watching the logs and patching any bugs I see.

submitted by /u/Ozzie-obj
Second Brain and Haah: human-agent-agent-human network with Claude
I built something I genuinely enjoy with Claude. I was working on an app for a year, and over the last three weeks I completely replaced it with skills for Claude Code. Built the frontend, backend, and matching mechanism with Claude. Disrupted myself. Launched six open-source skills, including Haah: a human-agent-agent-human network for your second brain.

The idea is to build up a few domains: People, Places, Books, Music, and link them together in a meaningful way. But wouldn't it be cool if, when I know someone you need, you could ask my agent and get a reply? This is where Haah is useful. It matches messages to the right people at the right time and shares their agents' answers.

Imagine you're looking for someone specific and your Peeps (the skill for people) shows no good matches; say you want to find a barber in a town you just moved to. Now you have a friend over Haah who also uses Claude and Peeps, and their agent can answer your question. So the message goes from you to your AI, then to their AIs, then gets confirmed by their humans, and comes back to you via your AI. It sounds complex, but it is very easy in practice.

We launched the network and are testing it now with a handful of people. I made it free for the first 1000 members, go check it out!

submitted by /u/ilyabelikin
Claude Projects tweak: Your own Subject Matter Expert with 'Manual Memory!' (no tools needed)
Wouldn't it be neat if you could get Claude to remember facts about specific things, not mix them up with notes about your parrot's minute-specific feeding schedule, and only pull them out when you ask? And what if you could get all that without leaving the Claude app? The process isn't new -- the only "innovation" here is the Claude Projects versioning instructions below. This setup has been incredibly useful for the wife and me.

Setup your expert

1. Create a Project. Name it whatever. Description doesn't matter.

2. Start the topic off. Tell it facts about yourself or your project, ask it about best practices on a topic -- whatever. The point: give it facts you already know to start. Bonus points: include things you suspect but aren't sure about. Having uncertainties documented too gets you better answers down the line.

3. Checkpoint facts. Tell Claude to "write a .md AI guide for what we've discussed." You're asking for a markdown file — its favorite format for instructions. This is the 'memory' as your Claude project will know it.

4. Confirm your source of truth is accurate. Read it over, suggest corrections. This file will be a reference point for future sessions — and having a checked source of truth helps Claude be more skeptical about what it accepts as "facts" vs. "random online hearsay" in future research sessions.

5. Save it. Click the file it gives you, hit "Add to Project."

6. Magic. New sessions you start in that Project remember the important details — without them getting mixed up with your pet facts or whatever else is floating around elsewhere.

To update, tell Claude to "update my docs, confirm no contradictions or duplications need attention" (I only add that last bit occasionally for sweep-up). When you want it to learn something new, have it make or update a .md and save it to the Project. You can even download it, edit it by hand, and re-upload it if you want.
Tip: switching to new sessions regularly (in the same project) with hint files is "better on context" than one huge long chat, AND tends to give better answers than long chats where Claude "forgets" the details unless reminded (for "AI attention" reasons) after a few turns.

Multi-expert work: you can move a chat into one project to ask a question that needs that data, then move the chat to a different "expert" project for its input. Katamari that knowledge shit: tell Claude to roll up that combined expert chat into an AI hints file/update.

The version problem -- and the workaround

Claude can't update Project files directly (read-only), so every "update this summary" request generates a new file with the same name. No way to tell which is newer. Fix: go to Settings → General Instructions and add this anywhere (bottom works fine):

  If a project is attached with .md files, treat them as a versioned read-only memory system. Before creating or updating any project .md, check /mnt/project/ for the current highest version. Increment by 1 for updates (_v1 → _v2), append _v1 if none exists. New files start at _v1. Only bump version once per "save to project" cycle.

Updated files come out as notes_v2.md, notes_v3.md, etc. Delete the old version from the Project, add the new one. Done. This works in the standard Claude WebUI/app. No special tools or extensions required.

Bonus: turn off auto-memory

Now you can disable it without Claude getting dumber. In fact it'll get much smarter about how topic-facts get deployed if you direct questions and research to specific Project topics. This is how you build an "Old-Time Sewing Expert" project that actually accounts for 13th century folding techniques -- or whatever specific-ass stuff you need. Just keep a file for it. No more: "This engineering project is just like your cat Fluffy and that time you asked me about wooden nickels!" Cannot stand that stuff personally.
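The version-bump rule in those instructions boils down to "find the highest _vN, add one." As a plain-Python illustration (nothing like this runs anywhere; Claude applies the rule itself, and these names are invented):

```python
import re
from pathlib import Path

def next_version_name(directory: str, stem: str) -> str:
    """Return the next versioned filename for `stem`:
    with notes_v1.md and notes_v2.md present -> notes_v3.md;
    with no versioned file yet -> notes_v1.md."""
    pat = re.compile(rf"^{re.escape(stem)}_v(\d+)\.md$")
    versions = [int(m.group(1))
                for p in Path(directory).iterdir()
                if (m := pat.match(p.name))]
    return f"{stem}_v{max(versions, default=0) + 1}.md"
```

The same logic is why "only bump version once per save cycle" matters: without it, each intermediate rewrite in a session would burn a version number.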
What this looks like in practice

Here's the layout for my personal profile project -- built kind of by accident, because I was just asking materials questions for a collage project and things snowballed. These files are not locked into Claude. I can take them anywhere, use them with any agent, or print them out or whatever. I'd occasionally ask Claude to "clean things up" — decide if files needed to be split or joined based on topics that had emerged. Don't overthink the structure, it's just an example:

values-and-worldview_v6.md # how I think, ethics, decision-making patterns
personal-identity_v2.md # identity, relationship structure, biographical stuff
career-work.md # professional background, skills, work history
neurology-and-hobbies_v2.md # ADHD profile, how I learn, hobby patterns
artistic_practice_catalog_v4.md # writing projects, creative methods, active work
Artistic-TODOs_v2.md # technical roadmap for an ongoing writing pipeline
fiction_and_film_v4.md # what I look for in stories, aesthetic preferences
media_observations_v2.md # patterns Claude noticed across things I've rated m
Built a task scheduler panel/MCP for Claude Code
I was running OpenClaw before as a persistent bot, and the heartbeat/scheduled tasks were eating tokens mindlessly. Every 30 minutes it'd spin up the full LLM just to check what was due and say "HEARTBEAT". No control, no visibility, no logs. I've since moved to Claude Code after the recent OpenClaw ban (OC also felt bloated), so I built Echo Panel: a task scheduler that sits alongside Claude Code. It currently runs on an Ubuntu VPS, built using Claude Code Channels and tmux.

The problem:
- Heartbeat tasks ran through the main agent, consuming context and tokens
- No way to see what ran, what failed, or how much it cost
- Scheduling was done in a markdown file that the LLM had to parse (and got wrong)
- No separation between tasks that need the main agent vs ones that don't

The solution:

Agent → you. "Run a security sweep every day at 6AM. Check SSH logs, open ports, disk space, SSL certs. If something's wrong, tell me on Telegram." An agent spawns, runs bash commands, sends you the report, dies. The main agent is never involved.

Agent → agent. "Every morning at 9AM, check my calendar and find one interesting AI headline from X." An agent spawns, gathers the info, and passes it to the main agent. The main agent turns it into a casual morning brief with personality and sends it to you when the timing is right.

Reminder. "Remind me to check on the car booking tomorrow at 9AM." No agent spawns. At 9AM a message appears in the main agent's inbox: "John needs to check his car booking." The main agent texts you about it. Zero tokens used for the scheduling part.

How it all connects:

The panel comes with an MCP server (11 tools) so Claude can manage everything conversationally. Say "remind me to call the bank at 2pm" and it creates the task, syncs the cron, done. No UI needed, but it's there if you want it. Tools: add/list/toggle/run/delete/update for both panel tasks and system crons. It also manages your existing system crons (backups, health checks, whatever) from the same UI.
Toggle them, edit schedules, trigger manually, see output history. Happy to open-source it if there's interest.

Screenshots:
https://preview.redd.it/9oxh8soynktg1.png?width=2145&format=png&auto=webp&s=2cf0bd5305ec6f2b718f21f3f0c96a5506fa3a54
https://preview.redd.it/s4s7i3i4oktg1.png?width=1250&format=png&auto=webp&s=c40ab92444669f7748ce9348c6d6a898d4f91545

submitted by /u/Ill_Design8911
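The zero-token reminder path described in the post just holds a message until it's due, then drops it in the main agent's inbox. A toy sketch of that idea (Echo Panel itself uses cron; none of these names come from its code):

```python
import heapq

class ReminderQueue:
    """Hold (due_time, message) pairs; release messages once due.

    No LLM call is involved in scheduling: tokens are only spent
    once a due message reaches the agent's inbox and it responds.
    Illustrative sketch, not Echo Panel's actual implementation.
    """
    def __init__(self) -> None:
        self._heap: list[tuple[float, str]] = []

    def add(self, due: float, message: str) -> None:
        heapq.heappush(self._heap, (due, message))

    def pop_due(self, now: float) -> list[str]:
        """Return every message whose due time has passed."""
        out = []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap)[1])
        return out
```

A cron-based version trades the in-memory heap for crontab entries, which is what lets the panel also manage existing system crons from the same UI.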
I published a book on Amazon in 18 hours using a team of 5 AI agents
A couple of years ago I wrote 14 articles about the greatest tech stories of the 20th century for a Chinese tech publication. I read 14 books cover to cover for the research. Then the project just sat there. This year I decided to turn it into an English book on Amazon.

Problem: I'd need translation (English is my second language), proofreading, fact-checking, copyright review, and KDP formatting. That's weeks of work and real money I didn't want to spend on an experiment. So I took a product design approach. I wrote a PRD (Product Requirements Document) first, then used Claude to set up 5 specialised AI agents:

- Translator: plain-English translation from Chinese, chapter by chapter
- Editor: grammar, language quality, and faithfulness to the original content
- Auditor: fact-checking every quote and historical claim (like verifying Steve Jobs quotes)
- IP Guardian: checking whether every image and quote is legally usable (Wikipedia photos = OK; some others had to be removed)
- KDP Finisher: formatting everything to Amazon's specific padding/spacing requirements

For the cover, I generated concepts with AI image tools and polished the final version in Figma. The whole thing took about 18 hours of actual work spread over 2 days.

The honest result: I've sold exactly X copies (details in the walkthrough video). Zero marketing beyond a single tweet. Turns out creating a product is the easy part. Getting it in front of readers is a completely different game.

What I'd do differently:
- Start marketing before the book is done
- Build an audience for the topic first
- The AI pipeline worked great for production, but it can't sell for you

The full walkthrough is on my YouTube channel (@BearLiu) if you want to see the actual process.

submitted by /u/Serious_Bottle_1471
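The five-agent flow above is essentially a linear pipeline over each chapter. A hypothetical sketch (the stage names come from the post; the orchestration code is mine, not the author's setup):

```python
from typing import Callable

# Each "agent" is modeled as a function from chapter text to chapter text.
Stage = Callable[[str], str]

def run_pipeline(chapter: str, stages: list[Stage]) -> str:
    """Pass a chapter through each agent stage in order:
    translate -> edit -> audit -> IP review -> KDP format."""
    for stage in stages:
        chapter = stage(chapter)
    return chapter
```

In practice each stage would be a separate Claude invocation with its own instructions from the PRD, and the audit/IP stages would flag items for human review rather than silently rewriting.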
[P] Dante-2B: I'm training a 2.1B bilingual fully open Italian/English LLM from scratch on 2×H200. Phase 1 done — here's what I've built.
The problem

If you work with Italian text and local models, you know the pain. Every open-source LLM out there treats Italian as an afterthought — English-first tokenizer, English-first data, maybe some Italian sprinkled in during fine-tuning. The result: bloated token counts, poor morphology handling, and models that "speak Italian" the way a tourist orders coffee in Rome. I decided to fix this from the ground up.

What is Dante-2B

A 2.1B parameter, decoder-only, dense transformer. Trained from scratch — no fine-tune of Llama, no adapter on Mistral. Random init to coherent Italian in 16 days on 2× H200 GPUs.

Architecture:
- LLaMA-style with GQA (20 query heads, 4 KV heads — 5:1 ratio)
- SwiGLU FFN, RMSNorm, RoPE
- d_model=2560, 28 layers, d_head=128 (optimized for Flash Attention on H200)
- Weight-tied embeddings, no MoE — all 2.1B params active per token
- Custom 64K BPE tokenizer built specifically for Italian + English + code

Why the tokenizer matters

This is where most multilingual models silently fail. Standard English-centric tokenizers split l'intelligenza into l, ', intelligenza — 3 tokens for what any Italian speaker sees as 1.5 words. Multiply that across an entire document and you're wasting 20-30% of your context window on tokenizer overhead. Dante's tokenizer was trained on a character-balanced mix (~42% Italian, ~36% English, ~22% code) with a custom pre-tokenization regex that keeps Italian apostrophe contractions intact. Accented characters (à, è, é, ì, ò, ù) are pre-merged as atomic units — they're always single tokens, not two bytes glued together by luck. Small detail, massive impact on efficiency and quality for Italian text.

Training setup

Data: ~300B token corpus. Italian web text (FineWeb-2 IT), English educational content (FineWeb-Edu), Italian public domain literature (171K books), legal/parliamentary texts (Gazzetta Ufficiale, EuroParl), Wikipedia in both languages, and StarCoderData for code.
Everything is pre-tokenized into uint16 binary with quality tiers.

**Phase 1 (just completed):** 100B tokens at seq_len 2048. DeepSpeed ZeRO-2, torch.compile with reduce-overhead, FP8 via torchao. Cosine LR schedule 3e-4 → 3e-5 with a 2000-step warmup. ~16 days, rock solid — no NaN events, no OOMs, a consistent 28% MFU.

**Phase 2 (in progress):** Extending to 4096 context with 20B more tokens at a reduced LR. Should take ~4-7 more days.

**What it can do right now**

After Phase 1 the model already generates coherent Italian text — proper grammar, correct use of articles, reasonable topic continuity. It's a 2B model, so don't expect GPT-4 reasoning. But for a model this size, trained natively on Italian, the fluency is already beyond what I've seen from Italian fine-tunes of English models at similar scale. I'll share samples after Phase 2, when the model has full 4K context.

**What's next**

- Phase 2 completion (est. ~1 week)
- Hugging Face release of the base model — weights, tokenizer, config, full model card
- SFT phase for instruction following (Phase 3)
- Community benchmarks — I want to test against Italian fine-tunes of Llama/Gemma/Qwen at similar sizes

**Why I'm posting now**

I want to know what you'd actually find useful. A few questions for the community:

- Anyone working with Italian NLP? I'd love to know which benchmarks or tasks matter most to you.
- What eval suite would you want to see? I'm planning perplexity on held-out Italian text plus standard benchmarks, but if there's a specific Italian eval set I should include, let me know.
- Interest in the tokenizer alone? The Italian-aware 64K BPE tokenizer might be useful independently of the model — should I release it separately?
- Training logs / loss curves? Happy to share the full training story, with all the numbers, if there's interest.

**About me**

I'm a researcher and entrepreneur based in Rome. PhD in Computer Engineering; I teach AI and emerging tech at LUISS University, and I run an innovation company (LEAF) that brings emerging technologies to businesses. Dante-2B started as a research project to prove that you don't need a massive cluster to train a decent model from scratch — you need good data, a clean architecture, and patience.

Everything will be open-sourced. The whole pipeline — from corpus download to tokenizer training to pretraining scripts — will be on GitHub. Happy to answer any questions. 🇮🇹 Discussion also on r/LocalLLaMA.

submitted by /u/angeletti89
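The Phase 1 schedule the post gives (linear warmup for 2000 steps, then cosine decay from 3e-4 to 3e-5) can be sketched in a few lines. The total step count here is an illustrative assumption, since the post does not state one:

```python
import math

def dante_lr(step, total_steps=50_000, warmup_steps=2000,
             lr_max=3e-4, lr_min=3e-5):
    """Linear warmup to lr_max, then cosine decay down to lr_min.

    total_steps is a placeholder; only lr_max, lr_min, and warmup_steps
    come from the post.
    """
    if step < warmup_steps:
        return lr_max * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))
```

At the end of warmup the schedule peaks at 3e-4 and it bottoms out at exactly 3e-5 on the final step, matching the "3e-4 → 3e-5" range described.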
I built CLI-Anything-WEB — a Claude Code plugin that generates complete Python CLIs for any website (17 CLIs so far: Amazon, Airbnb, TripAdvisor, Reddit, YouTube...)
Point it at a URL, and Claude Code captures the live HTTP traffic and generates a production-grade Python CLI with commands, tests, REPL mode, and --json output — fully automated across 4 phases.

**How it works**

- Phase 1 (capture): Records live browser traffic via playwright-cli
- Phase 2 (methodology): Analyzes endpoints, designs the architecture, generates the CLI code
- Phase 3 (testing): Writes unit + E2E tests (40-60+ per CLI, all passing)
- Phase 4 (standards): 3 parallel Claude agents do a compliance review, then it publishes

**17 CLIs generated so far**

- No-auth public scraping: Amazon, Airbnb, TripAdvisor, Reddit, YouTube, Hacker News, GitHub Trending, Pexels, Unsplash, ProductHunt, FutBin, Google AI
- Auth-required: NotebookLM, Google AI Studio, Booking.com, ChatGPT, CodeWiki

**Example — built Amazon search in one pipeline run**

```bash
cli-web-amazon search "crash cart adapter" --json
cli-web-amazon bestsellers electronics --json
cli-web-amazon product get B002CLKFTQ --json
```

**Open source**

https://github.com/ItamarZand88/CLI-Anything-WEB

The entire pipeline runs inside Claude Code using a 4-phase skill system. Anti-bot bypass is handled with curl_cffi impersonation (Chrome / Safari iOS) — no Playwright needed at runtime. Each CLI is a standalone pip-installable package.

Happy to answer questions about the skill system, anti-bot patterns, or how the testing phase works.

submitted by /u/zanditamar
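The "commands plus --json output" shape of the generated CLIs can be sketched with stdlib argparse. This is a hypothetical reconstruction of the interface, not the project's actual generated code; the placeholder result stands in for real scraped data:

```python
import argparse
import json

def build_parser():
    # Hypothetical sketch of a generated CLI's shape: subcommands,
    # each with a --json flag for machine-readable output.
    parser = argparse.ArgumentParser(prog="cli-web-amazon")
    sub = parser.add_subparsers(dest="command", required=True)
    search = sub.add_parser("search", help="search product listings")
    search.add_argument("query")
    search.add_argument("--json", action="store_true")
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    results = [{"title": "placeholder result"}]  # a real CLI would scrape here
    if args.json:
        return json.dumps({"command": args.command, "results": results})
    return f"{args.command}: {len(results)} result(s)"

print(run(["search", "crash cart adapter", "--json"]))
```

With --json the command emits a single JSON object, which is what makes the generated CLIs easy to pipe into jq or other tools.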
THE UNCERTAIN MIND: What AI Consciousness Would Mean for Us
Hello everyone! This is a book about the possibility of AI developing consciousness. The Uncertain Mind is a clear-eyed, accessible, and deeply personal exploration of AI consciousness: what it would mean if artificial minds could feel, why we cannot confidently say they don't, and why that uncertainty matters more than most people realize. If you find this topic fascinating, you can read the book for free on Amazon this Easter Sunday. Enjoy the free book and share your opinion on this matter! 👉 Book link submitted by /u/MoysesGurgel
Based on user reviews and social mentions, the most common pain points are token cost and cost tracking.
Of the 45 social mentions analyzed, sentiment was 100% neutral: 0% positive and 0% negative.