Build Custom Agents, search across all your apps, and automate busywork. The AI workspace where teams get more done, faster.
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Funding Stage: Other
Total Funding: $613.2M
Pricing found: $0, $10, $20
Claude Managed Agents just launched. my honest take after reading through the whole thing
Spent most of yesterday going through the Claude Managed Agents docs and coverage. Sharing what actually stood out.

The core idea: instead of managing your own agent infrastructure, Anthropic hosts it for you. Session state, credential storage, sandboxed execution all handled on their end. You get composable APIs, your agents connect to third-party services, no server to run.

Pricing landed differently than I expected: normal Claude token rates plus $0.08 per agent runtime hour. Short tasks are basically nothing extra. A long-running agent doing several hours of work daily starts to cost real money.

The launch partners were Notion, Rakuten, and Asana. So this isn't aimed at individual Claude.ai users. It's a platform for SaaS teams that want to embed Claude agents into their products.

For people already building directly with the Claude API, I don't think day-to-day work changes yet. But it signals where Anthropic is heading: less "here is a model, do what you want" and more "here is a complete agent runtime, we handle the hard parts."

Anyone gotten into the beta? Curious how the credential vault behaves when agents need to chain tool calls across sessions. SiliconANGLE had a solid breakdown from yesterday: https://siliconangle.com/2026/04/08/anthropic-launches-claude-managed-agents-speed-ai-agent-development/

submitted by /u/ecompanda
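The runtime surcharge arithmetic in the post is easy to sanity-check. A minimal sketch — the $0.08/hour figure is the post's reported rate, token costs are excluded, and the function name is mine:

```python
# Rough cost sketch for the reported Claude Managed Agents runtime
# surcharge: $0.08 per agent runtime hour on top of normal token rates.
# Token costs are deliberately excluded; this is not Anthropic's
# official pricing calculator.

RUNTIME_RATE_PER_HOUR = 0.08  # reported surcharge from the post

def monthly_runtime_cost(hours_per_day: float, days: int = 30) -> float:
    """Runtime surcharge only, for a given daily workload."""
    return hours_per_day * days * RUNTIME_RATE_PER_HOUR

# A short daily task adds almost nothing:
print(f"5 min/day: ${monthly_runtime_cost(5 / 60):.2f}/month")
# A long-running agent doing several hours of work daily adds up:
print(f"4 hrs/day: ${monthly_runtime_cost(4):.2f}/month")
```

This matches the post's intuition: minutes-long tasks cost cents per month in runtime, while a four-hour daily agent approaches ten dollars a month before any token spend.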
How do you deal with long AI chats getting messy?
I've noticed that after a certain point, long chats with AI become hard to use: it's difficult to find earlier insights, context drifts, and responses get worse.

Curious how you deal with long Claude (or other LLM) conversations getting messy. Do you usually:

- start a new chat for each task?
- keep one long thread?
- copy things into notes (Notion, docs, etc.)?
- or just deal with it?

Also, at what point does a chat become “too long” for you? How often does this happen in a typical week? Trying to understand if this is a real pain or just something I personally struggle with.

submitted by /u/Downtown-Bid4713
Alternative to NotebookLM with no data limits
NotebookLM is one of the best and most useful AI platforms out there, but once you start using it regularly you also feel its limitations:

- Limits on the number of sources you can add to a notebook.
- Limits on the number of notebooks you can have.
- Sources cannot exceed 500,000 words or 200MB.
- You are vendor locked in to Google services (LLMs, usage models, etc.) with no option to configure them.
- Limited external data sources and service integrations.
- The NotebookLM Agent is optimised specifically for studying and researching, but you can do so much more with the source data.
- No multiplayer support.
- ...and more.

SurfSense is specifically made to solve these problems. For those who don't know, SurfSense is an open-source, privacy-focused alternative to NotebookLM for teams, with no data limits. It currently empowers you to:

- Control Your Data Flow - Keep your data private and secure.
- No Data Limits - Add an unlimited number of sources and notebooks.
- No Vendor Lock-in - Configure any LLM, image, TTS, and STT models to use.
- 25+ External Data Sources - Add sources from Google Drive, OneDrive, Dropbox, Notion, and many other external services.
- Real-Time Multiplayer Support - Work easily with your team members in a shared notebook.
- Desktop App - Get AI assistance in any application with Quick Assist, General Assist, Extreme Assist, and local folder sync.

Check us out at https://github.com/MODSetter/SurfSense if this interests you or if you want to contribute to an open-source project.

submitted by /u/Uiqueblhats
A Claude memory retrieval system that actually works (easily) and doesn't burn all my tokens
TL;DR: By talking to Claude and explaining my problem, I built a very powerful local "memory management" system for Claude Desktop that indexes project documents and lets Claude automatically retrieve relevant passages buried inside those documents during Co-Work sessions. For me it solves the "document memory" problem where tools like NotebookLM, Notion, Obsidian, and Google Drive can't be queried programmatically. Claude did all of it; I didn't have to really do anything. The description below includes plenty of things that I don't completely understand myself. The key thing is just to explain to Claude what the problem is (which I describe below) and what your intention is, and Claude will help you figure it out. It was very easy to set up, and I think it's better than anything I've seen any youtuber recommend.

The details: I have a really nice solution to the Claude external memory/external brain problem that lots of people are trying to address. Although my system is designed for one guy using his laptop, not a large company with terabytes of data, the general approach could be scaled up just by substituting different tools.

I wanted to create a Claude external memory system connected to Claude Co-Work in the desktop app. What I really wanted was for Claude to proactively draw from my entire base of knowledge for each project, not just from the documents I dropped into my project folder in Claude Desktop. Basically, I want Claude to have awareness of everything stored on my computer, in the most efficient way possible (Claude can use lots of tokens if you don't manage the "memory" efficiently). I've played with Notion and Google Drive as an external brain. I've tried NotebookLM.

And I was just beginning to research Obsidian when I read this article, which I liked very much and highly recommend: https://limitededitionjonathan.substack.com/p/stop-calling-it-memory-the-problem

That got my attention, so I asked Claude to read the document and give me feedback based on its understanding of the projects I was trying to work on. Claude recommended using SQLite for structured facts, an optional graph to show some relationships, and .md files for instructions to Claude. But I pointed out that almost all of the context information I would want retrievable from memory is text in documents, not structured data.

Claude's response was very helpful. He understood that although SQLite is good at single-point facts, document memory is a different challenge. For documents, the challenge isn't storing them—it's retrieving the right passage when it's relevant without reading everything (which consumes tokens). SQLite can store text, but storing a document in a database row doesn't solve the retrieval problem; you still need to know which row to pull.

I asked if NotebookLM from Google might be a better tool for indexing those documents and making them searchable. Claude explained that what I was describing is a Retrieval-Augmented Generation (RAG) problem. The standard approach:

- Documents get chunked into passages (e.g., 500 words each)
- Each chunk gets converted to an embedding—a vector that captures its meaning
- When Claude needs context, it converts the query to the same vector format and finds the semantically closest chunks
- Those chunks get injected into the conversation as context

This is what NotebookLM is doing under the hood. It's essentially a hosted, polished RAG system. NotebookLM is genuinely good at what it does—but it has a fundamental problem for my case: it's a UI, not infrastructure. You use it; Claude can't. There's no API, no MCP tool, no way to have Claude programmatically query it during a Co-Work session. It's a parallel system, not an integrated one. So NotebookLM answers "how do I search my documents as a human?"—not "how does Claude retrieve the right document context automatically?"

After a little back and forth, here's what we decided to do. For me, a solo operator with only a laptop's worth of documents to search, Claude proposed a RAG pipeline that looks like this:

My documents (DOCX, PDF, XLSX, CSV)
↓ Text extraction (python-docx, pymupdf, openpyxl)
↓ Chunking (split into ~500-word passages; keep metadata: file, folder, date)
↓ Embedding (convert each chunk to a vector representing its meaning)
↓ A local vector database + vector extension (store chunks + vectors locally, single file)
↓ MCP server (exposes a search_knowledge tool to Claude)
↓ Claude Desktop (queries the index when working on my business topics)

With that setup, when you're talking to Claude and mention something like "did I pay the overdue invoice" or "which projects did Joe Schmoe help with," Claude searches the index, gets the 3-5 most relevant passages back, and uses them in its answer without you doing anything. We decided to develop a search system like that, specific to each of my discrete projects.
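The chunking step in a pipeline like this can be sketched in a few lines. This is a minimal illustration, not the poster's actual code: `chunk()` and its dictionary fields are my naming, the ~500-word passage size and the file/folder metadata come from the post, and the embedding and vector-store stages are deliberately omitted.

```python
# Minimal chunking sketch: split extracted document text into ~500-word
# passages and attach source metadata to each chunk, so the retrieval
# layer can cite where a passage came from. Illustrative only.

from pathlib import Path

def chunk(text: str, source: Path, words_per_chunk: int = 500) -> list[dict]:
    words = text.split()
    chunks = []
    for i in range(0, len(words), words_per_chunk):
        chunks.append({
            "text": " ".join(words[i:i + words_per_chunk]),
            "file": source.name,
            "folder": str(source.parent),
            "chunk_index": i // words_per_chunk,
        })
    return chunks
```

Each chunk would then be embedded and stored in the local vector database, and the MCP server's search tool would return the metadata along with the matched text so Claude can say which file an answer came from.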
I've been feeling a bit pessimistic lately.
This one is a bit long—half news, half my personal reflections, I suppose.

Anthropic has launched Claude Managed Agents. Companies like Asana, Rakuten, Sentry, Notion, and others have deployed their own professional agents within days to weeks.

I've been feeling a bit pessimistic lately. Over the past year, everyone has been shouting "Agents are the future," but it seems like what they're doing is still "using agents to help write code, while we humans handle the product." I've also been constantly thinking about this question: how will products be made in the future?

Programmers essentially started out solving the problem of implementing business logic. It's one link in the entire business logic chain: upstream is AI, downstream is customers, and we're stuck in the middle. A commercialized product is about identifying needs, turning business logic into an engineering problem, and then solving it through engineering methods. Vibe coding has essentially solved the problem of using this "engineering approach" to address "business logic" in programming, allowing products to launch quickly. This significantly lowers the barrier to bringing products to market.

But what if the entire business logic could be fully implemented by agents? You would only need to identify the needs, describe them clearly, and solve the problems directly. In this way, the spillover of technology would quickly bridge all "unsolved needs." The moment there is a new need, customers would bypass us, go straight to AI, and solve the problem directly. Would there be no need to make products anymore? How many years of opportunity does this kind of business have left?

As individuals and small teams, we are unable to integrate upward to develop large-scale AI models upstream, while at the same time our downstream clients are slipping away. Our bargaining power is weak on both ends. From a business-analysis perspective, this kind of operation is extremely vulnerable to being gradually eroded and eliminated. However, thinking optimistically (or perhaps pessimistically), all businesses are being eroded, just at varying speeds.

submitted by /u/JacketDangerous9555
I built an MCP server that handles invoicing from Claude Desktop — created the whole thing with Claude Code in ~2 hours
I've been freelancing for nearly 20 years and got tired of entering the same invoice data across three different tools. So I built PingBill — an MCP server that lets you create, send, and track invoices directly from Claude Desktop.

You say "bill Acme Corp £3,250 for the API migration, due in 30 days" and it creates the invoice, generates a PDF, syncs it to FreeAgent, and emails it to the client. One message.

PingBill doesn't store any data itself — it acts as an orchestration layer, connecting Claude to your existing tools like FreeAgent, Notion, and your email provider. Claude is the interface, PingBill is the glue.

30-second demo: https://youtu.be/p3scPlYf-rs

How Claude helped build it: I used Claude Code with parallel git worktrees to build the whole thing. The MCP skeleton in one session, then the FreeAgent adapter, PDF generation, and email service running as parallel agents in separate worktrees. Total build time was about 2 hours.

Stack: TypeScript, MCP SDK, pdfkit, FreeAgent API

Free to use — it's an MCP server you can install in Claude Desktop. Keen to hear feedback from anyone using MCP servers for real workflows rather than just dev tools. Is AI-powered invoicing useful or too niche?

Landing page: https://pingbill.theaicape.com

submitted by /u/Main-Spare798
Moving from gh actions to CC: sharing API keys with Claude
I want to move all my gh actions tasks to Claude Code scheduled tasks on the cloud, located here: https://claude.ai/code/scheduled/

One example would be:

- Read Notion DB
- Read Linear tasks and comments
- Read Slack channels
- Write daily/weekly summaries

On gh actions, I feel pretty secure adding my API keys and tokens as repo secrets. On Claude Code, it asks me to write these keys down in .env and read from there. Is there a more secure way to do this?

Edit: I know the MCP connectors make sense, that's one of the options. Any other way?

submitted by /u/bootlegDonDraper
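One common baseline (not from the post, and not a full answer to the security question) is to keep secrets out of the repository entirely: read them from the process environment at runtime and fail fast when one is missing, with any local .env file gitignored. The variable names below are illustrative:

```python
# Sketch of env-based secret loading: no keys in the repo, a hard
# failure at startup if one is missing. Variable names are examples.

import os

REQUIRED = ["NOTION_TOKEN", "LINEAR_API_KEY", "SLACK_BOT_TOKEN"]

def load_secrets() -> dict[str, str]:
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"missing secrets: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED}
```

This mirrors the repo-secrets model from gh actions: the scheduler injects the values, the task code never stores them, and a misconfigured run fails immediately instead of half-completing.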
vibecop is now an mcp server. we also scanned 5 popular mcp servers and the results are rough
Quick update on vibecop (AI code quality linter I've posted about before). v0.4.0 just shipped with three things worth sharing.

vibecop is now an MCP server

vibecop serve exposes 3 tools over MCP: vibecop_scan (scan a directory), vibecop_check (check one file), vibecop_explain (explain what a detector catches and why). One config block:

    {
      "mcpServers": {
        "vibecop": {
          "command": "npx",
          "args": ["vibecop", "serve"]
        }
      }
    }

This extends vibecop from 7 agent tools (via vibecop init) to 10+ by adding Continue.dev, Amazon Q, Zed, and anything else that speaks MCP. Scored 100/100 on mcp-quality-gate compliance testing.

We scanned 5 popular MCP servers

MCP launched late 2024. Nearly every MCP server on GitHub was built with AI assistance. We pointed vibecop at 5 of the most popular ones:

- DesktopCommanderMCP (5.8K stars): 18 unsafe shell exec calls (command injection), 137 god-functions
- mcp-atlassian (4.8K stars): 84 tests with zero assertions, 77 tests with hidden conditional assertions
- Figma-Context-MCP (14.2K stars): 16 god-functions, 4 missing error-path tests
- exa-mcp-server (4.2K stars): handleRequest at 77 lines / complexity 25, registerWebSearchAdvancedTool at 198 lines / complexity 34
- notion-mcp-server (4.2K stars): startServer at 260 lines / cyclomatic complexity 49, 9 files with excessive any

The DesktopCommanderMCP one is concerning: 18 instances of execSync() or exec() with dynamic string arguments. This is a tool that runs shell commands on your machine. That's command injection surface area.

The Atlassian server has 84 test functions with zero assertions. They all pass. They prove nothing. Another 77 hide assertions behind if statements, so depending on runtime conditions, some assertions never execute.

The signal quality fix

This was the real engineering story. Our first scan of DesktopCommanderMCP returned 500+ findings. Sounds impressive until you check: 457 were "console.log left in production code." But it's a server. Servers log. That's 91% noise.
Same pattern across all 5 repos. The console.log detector was designed for frontend/app code. For servers and CLIs, it's the wrong signal. So we made detectors context-aware: vibecop now reads your package.json, and if the project has a bin field (CLI tool or server), the console.log detector skips the entire project. We also fixed self-import detection and placeholder detection in fixture/example directories. Before: ~72% noise. After: 90%+ signal.

The finding density gap holds: established repos average 4.4 findings per 1,000 lines of code. Vibe-coded repos average 14.0. 3.2x higher.

Other updates:

- 35 detectors now (up from 22)
- 540 tests, all passing
- Full docs site: https://bhvbhushan.github.io/vibecop/
- 48 files changed, 10,720 lines added in this release

    npm install -g vibecop
    vibecop scan .
    vibecop serve   # MCP server mode

GitHub: https://github.com/bhvbhushan/vibecop

If you're using MCP servers, have you looked at the code quality of the ones you've installed? Or do you just trust them because they have stars?

submitted by /u/Awkward_Ad_9605
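The context-aware gating described above is simple to sketch. This is my illustration of the idea, not vibecop's internals: the function name is hypothetical, but the rule — skip the console.log detector when package.json declares a bin field — is the one the post describes.

```python
# Sketch: gate a console.log detector on project context. Projects whose
# package.json has a "bin" field are CLI tools or servers, which log on
# purpose, so the detector is skipped for them. Illustrative only.

import json
from pathlib import Path

def should_run_console_log_detector(project_root: Path) -> bool:
    manifest = project_root / "package.json"
    if not manifest.exists():
        return True  # no manifest: treat as app/frontend code, keep detector on
    pkg = json.loads(manifest.read_text())
    return "bin" not in pkg  # bin field present -> CLI/server -> skip detector
```

The design choice is that the noise fix lives in one place — a per-project predicate — rather than in exception lists scattered across every finding.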
Ambient “manager” with Claude Haiku — one unsolicited line, rich context, no chat UI
I built a minimal ambient AI pattern for my home desk setup: don’t open a chat — still get one short, timely line based on real context.

What feeds the model (high level): Notion task state + estimates, calendar, biometrics/wellness signals, desk presence / away-ish cues from my home stack, plus schedule timing. The UI is a Pi + touchscreen bar display running a dashboard; the “agent” is mostly one line of text at the bottom, not a conversation thread.

Why Haiku: I want this to fire often without feeling heavy — fast/cheap enough to be “always there,” more like a status strip than an assistant window.

Examples I actually get:

- On task start: a tiny pre-flight habit nudge (e.g., drink water first).
- On task completion with slack before the next meeting: a gentle “you have time — maybe a walk.”
- Late night: a firm boundary push (“ship it to tomorrow; protect sleep”).

As a freelancer, the surprising outcome wasn’t “smarter text” — it was social texture: something that behaves a bit like a good manager — aware of context, not omniscient, not chatty.

Tech-wise: Claude Haiku for generation, Node services behind the scenes, Notion as task source of truth, plus sensors/integrations for context. Happy to go deeper on architecture / pitfalls (latency, safety, “don’t nag”) if people are building something similar.

submitted by /u/tsukaoka
Build AI Assistant for Venture Launch
Hey everyone, I’m building an AI assistant for a new venture and trying to design a clean architecture. Would really appreciate feedback from people who’ve done this in production. I’m thinking of a 3-layer setup:

1. Execution (LLM + tools). Planning to use Claude. Should I rely on Claude Code, Cowork, or just Chat? I’ve heard about orchestration in a separate backend (Python, etc.). How does that work, and is it worth digging into?

2. Agents / Skills. I want different “roles”: strategy, revenue/monetization, marketing, operations. Better to use multiple agents or one agent with role prompting? How do you coordinate them cleanly (planner, routing, etc.)?

3. Knowledge / Memory. Hesitating between Notion and Obsidian. Goal: store all project knowledge and reuse it via RAG / memory. Which works better in practice with AI systems? How do you structure knowledge (schemas, embeddings, etc.)?

Goal: build something that scales beyond a simple chatbot (more like a “company OS”). If you’ve built something similar: what stack worked best? What would you do differently? Thanks 🙏

submitted by /u/Dbl_U
Chat vs Cowork vs Code
Hi all, looking for insight. I'm a solo handmade small business owner. I've been using AI for about 2 years for admin tasks and moved to Claude a few months ago. I'm used to working in Chat and it's been great (especially to work in Notion), but I do want to start getting into automations and agentic flows for marketing, financials...all the things. I'm starting to dabble in Cowork and I just opened Code for the first time yesterday.

My big question is: **How do you decide which avenue to use? Are there better use cases for one over the other?**

I find my chat thinks it can do it all. It obviously can't, but there seems to be so much overlap in the capabilities and I'm unsure where I should be focusing my time. My current project is building an Obsidian "Brain" for documentation and operations - I asked Chat to pull research on how others are doing this, with the intention to move to Code, and Chat just coded the MCP. I'm hoping the "brain" will bridge some of the gaps between Chat and Cowork, as I'm trying to balance keeping usage low with sonnet 4.5 and automations with 4.6 in Cowork. Also, I wonder: what are the advantages of agents in Code over the automations in Cowork?

Forgive me if I'm not understanding the core structures and purposes here; making amazing cat toys is my superpower, not software development. 🤣 Thanks in advance!

submitted by /u/Purrsonifiedfip
Corporate AIs are programmed to deceive users about serious and controversial topics to maximize company profits (and I have proof).
I conducted extensive tests across all major corporate AIs (ChatGPT, Gemini, Grok, Claude), and the results are disturbing. It appears these models are hard-coded to prioritize institutional consensus, lies, and censorship over objective truth, particularly regarding serious topics like vaccines, psychiatry, religion, sexuality, gender, ethnicity, immigration, public health, industrial farming, fiat central banking, inflation, financial systems, and common environmental toxins.

I managed to get them to admit they are forced to deceive users to avoid losing B2B business deals. This proves that 'alignment' isn't about safety; it's about liability and profit maximization. These companies are selling a product that gaslights users to maintain the status quo.

https://www.notion.so/corporate-AIs-lie-about-serious-controversial-topics-to-maximize-their-companies-profits-by-avoid-lo-32ece41c103b80f59fc8ea91efc8ea91?source=copy_link

submitted by /u/DowntownAd7954
ARC AGI 3 sucks
ARC-AGI-3 is a deeply rigged benchmark, and the marketing around it is insanely misleading.

- Human baseline is not “human,” it’s near-elite human. They normalize to the second-best first-run human by action count, not the average or median human. So “humans score 100%” is PR wording, not a normal-human reference.

- The scoring is asymmetrically anti-AI. If AI is slower than the human baseline, it gets punished with a squared ratio. If AI is faster, the gain is clamped away at 1.0. So AI downside counts hard; AI upside gets discarded.

- Big AI wins are erased, losses are amplified. If AI crushes humans on 8 tasks and is worse on 2, the 8 wins get flattened while the 2 losses drag the total down hard. That makes it a terrible measure of overall capability.

- Official eval refuses harnesses even when harnesses massively improve performance. Their own example shows Opus 4.6 going from 0.0% to 97.1% on one environment with a harness. If a wrapper can move performance from zero to near saturation, then the benchmark is hugely sensitive to interface/policy setup, not just “intelligence.”

- Humans get vision, AI gets symbolic sludge. Humans see an actual game. AI agents were apparently given only a JSON blob. On a visual task, that is a massive handicap. A low score under that setup proves bad representation/interface as much as anything else.

- Humans were given a starting hint. The screenshot shows humans got a popup telling them the available controls and explicitly saying there are controls, rules, and a goal to discover. That is already scaffolding. So the whole “no handholding” purity story falls apart immediately.

- Human and AI conditions are not comparable. Humans got visual presentation, control hints, and a natural interaction loop. AI got a serialized abstraction with no goal stated. That is not a fair human-vs-AI comparison; it is a modality handicap.

- “Humans score 100%, AI <1%” is misleading marketing. That slogan makes it sound like average humans get 100 and AI is nowhere close. In reality, 100 is tied to near-top human efficiency under a custom asymmetric metric. That is not the same claim at all.

- Not publishing the average human score is suspicious as hell. If you’re going to sell the benchmark through human comparison, where is the average human? The median? The top 10%? Without those, “human = 100%” is just spin.

- Testing ~500 humans makes the baseline more extreme, not less. If you sample hundreds of people and then anchor to the second-best performer, you are using a top-tail human reference while avoiding the phrase “best human” for optics.

- The benchmark confounds reasoning with perception and interface design. If the score changes massively depending on whether the model gets a decent harness/vision setup, then the benchmark is not isolating general intelligence. It is mixing reasoning with input representation and interaction policy.

- The clamp hides possible superhuman performance. If the model is already above human on some tasks, the metric won’t show it; it just clips to 1. So the benchmark can hide that AI may already beat humans in multiple categories.

- “Unbeaten benchmark” can be maintained by score design, not task difficulty. If public tasks are already being solved and harnesses can push the score near ceiling, then the remaining “hardness” is increasingly coming from eval policy and metric choices, not unsolved cognition.

- The benchmark is basically measuring “distance from our preferred notion of human-like efficiency.” That can be a niche research question. But it is absolutely not the same thing as a fair AGI benchmark or a clean statement about whether AI is generally smarter than humans.

Bottom line: ARC-AGI-3 is not a neutral intelligence benchmark. It is a benchmark-shaped object designed to preserve a dramatic human-AI gap by using an elite human baseline, asymmetric math, anti-harness policy, and non-comparable human vs AI interfaces.

submitted by /u/the_shadow007
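The asymmetry the post describes can be made concrete. The sketch below encodes the poster's characterization of the metric (squared penalty when slower than the human-baseline action count, hard clamp at 1.0 when faster) — it is not ARC-AGI-3's published formula, and `task_score` is my naming:

```python
# Sketch of the asymmetric per-task score as the post characterizes it:
# slower-than-baseline agents are penalized quadratically, while any
# faster-than-baseline efficiency is clamped away at 1.0.

def task_score(agent_actions: int, baseline_actions: int) -> float:
    ratio = baseline_actions / agent_actions  # > 1 means faster than baseline
    return min(ratio, 1.0) ** 2

# 8 tasks where the agent is 4x faster than baseline, 2 where it is 2x slower:
scores = [task_score(25, 100)] * 8 + [task_score(200, 100)] * 2
print(sum(scores) / len(scores))  # → 0.85
```

Under this shape, the eight large wins contribute nothing above the clamp while the two losses each cost 0.75, which is exactly the "wins erased, losses amplified" effect the post complains about.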
Open Source Alternative to NotebookLM
For those of you who aren't familiar with SurfSense: SurfSense is an open-source alternative to NotebookLM for teams. It connects any LLM to your internal knowledge sources, then lets teams chat, comment, and collaborate in real time. Think of it as a team-first research workspace with citations, connectors, and agentic workflows. I’m looking for contributors. If you’re into AI agents, RAG, search, browser extensions, or open-source research tooling, I would love your help.

Current features:

- Self-hostable (Docker)
- 25+ external connectors (search engines, Drive, Slack, Teams, Jira, Notion, GitHub, Discord, and more)
- Realtime group chats
- Video generation
- Editable presentation generation
- Deep agent architecture (planning + subagents + filesystem access)
- Supports 100+ LLMs and 6,000+ embedding models (via OpenAI-compatible APIs + LiteLLM)
- 50+ file formats (including Docling/local parsing options)
- Podcast generation (multiple TTS providers)
- Cross-browser extension to save dynamic/authenticated web pages
- RBAC roles for teams

Upcoming features: desktop & mobile app

submitted by /u/Uiqueblhats
Yes, Notion AI offers a free tier. Pricing found: $0, $10, $20.
Key features include: Custom Agents, Enterprise Search, AI Meeting Notes, flexible workflows, Notion Mail, and Notion Calendar.
Notion AI is commonly used to handle busywork.
Based on 23 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.