Recall.ai provides an API to get recordings, transcripts, and metadata from video conferencing platforms like Zoom, Google Meet, Microsoft Teams, and more.
Record video conferences through a bot that joins the call. Best when explicit recording consent is needed, or for building AI agents.

Record video conferences and in-person meetings through a desktop app, without a bot in the call. Best for a stealthier recording experience.

Customer testimonials:

- "Legal tech is high stakes. Working with Recall.ai, we had scalable Zoom meeting support ready in under two months, rather than six."
- "Recall.ai allows us to build meeting recording features without worrying about infrastructure. It has helped us move faster than we could have with an in-house build."
- "We're building an AI Scribe for our doctors, and Recall.ai was the first piece of infrastructure I pushed to bring in. I'd seen how seamlessly it handled meeting data at my last company, which made choosing it again an easy call."
- "Recall.ai's Meeting Bot API saved us from months of pain. One integration, extremely reliable, and we launched our meeting bot feature in days."
- "Once we started using Recall.ai's Desktop Recording SDK to power Mem's meeting recording experience, the painful edge cases that we had to chase on the support side went to zero."
- "Recall.ai allows us to operate reliable, enterprise-scale meeting transcriptions without worrying about infrastructure or security."
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 34
Funding Stage: Series B
Total Funding: $50.8M
Pricing found: $38
Persistent memory changes how people interact with AI — here's what I'm observing
I run a small AI companion platform and wanted to share some interesting behavioral data from users who've been using persistent cross-session memory for 2-3 months now. Some patterns I didn't expect:

- **"Deep single-thread" users dominate.** 56% of our most active users put 70%+ of their messages into a single conversation thread. They're not creating multiple characters or scenarios; they're deepening one relationship. This totally contradicts the assumption that users are "scenario hoppers."
- **Memory recall triggers emotional responses.** When the AI naturally brings up something from weeks ago ("how did that job interview go?", or referencing a pet's name without being prompted), users consistently react with surprise and increased engagement. It's a retention mechanic that doesn't feel like a retention mechanic.
- **The "uncanny valley" of memory exists.** If the AI remembers too precisely (exact dates, verbatim quotes), it feels surveillance-like. If it remembers too loosely, it feels like it didn't really listen. The sweet spot is what I'd call "emotionally accurate but detail-fuzzy," like how a real friend remembers.
- **Day-7 retention correlates with memory depth.** Users who trigger 5+ memory retrievals in their first week retain at nearly 4x the rate of those who don't. The memory system IS the product, not a feature.

Sample size is small (~800 users), so take this with appropriate skepticism. But it's consistent enough that I think persistent memory is going to be table stakes for AI companions within a year.

What's your experience with memory in AI conversations? Anyone else building in this space?

submitted by /u/DistributionMean257
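The "emotionally accurate but detail-fuzzy" sweet spot can be pictured as a retrieval policy: surface a memory's gist always, but drop precise details once the memory is old enough. A minimal toy sketch; the `Memory` fields and the cutoff are invented for illustration, not the poster's actual system:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    gist: str       # emotional/semantic summary, e.g. "user had a job interview"
    details: str    # precise facts, e.g. exact date and company
    age_days: int   # how long ago the memory was formed

def recall(memory: Memory, detail_cutoff_days: int = 7) -> str:
    # Recent memories surface with full detail; older ones keep only the
    # gist, approximating "emotionally accurate but detail-fuzzy" recall.
    if memory.age_days <= detail_cutoff_days:
        return f"{memory.gist} ({memory.details})"
    return memory.gist

old = Memory("user had a job interview", "Tuesday 2pm at Acme Corp", age_days=21)
print(recall(old))  # -> user had a job interview
```

A real system would blend this with retrieval scoring rather than a hard cutoff, but the shape is the same: keep the relationship-level fact, let the surveillance-grade specifics fade.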
I tested what happens when you give an AI coding agent access to 2 million research papers. It found techniques it couldn't have known about.
Quick experiment I ran. Took two identical AI coding agents (Claude Code) and gave them the same task: optimize a small language model. One agent worked from its built-in knowledge. The other had access to a search engine over 2M+ computer science research papers.

Agent without papers: did what you'd expect. Tried well-known optimization techniques. Improved the model by 3.67%.

Agent with papers: searched the research literature before each attempt. Found 520 relevant papers and tried 25 techniques from them, including one from a paper published in February 2025, months after the AI's training cutoff. It literally couldn't have known about this technique without paper access. Improved the model by 4.05% (3.2% better).

The interesting moment: both agents tried the same idea (halving the batch size). The one without papers got it wrong, missed a crucial adjustment, and the whole thing failed. The one with papers found a rule from a 2022 paper explaining exactly how to do it and got it right on the first try.

Not every idea from papers worked. But the ones that did were impossible to reach without access to the research.

AI models have a knowledge cutoff; they can't see anything published after their training. And even for older work, they don't always recall the right technique at the right time. Giving them access to searchable literature seems to meaningfully close that gap.

I built the paper search tool (Paper Lantern) as a free MCP server for AI coding agents: https://code.paperlantern.ai

Full experiment writeup: https://www.paperlantern.ai/blog/auto-research-case-study

submitted by /u/kalpitdixit
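The search-before-each-attempt loop reduces to: query the literature, merge any found techniques into the candidate pool, then pick the best. A toy sketch with an invented in-memory "corpus" standing in for the real 2M-paper index; every name and number below is hypothetical, not data from the experiment:

```python
# Built-in knowledge vs. literature-augmented candidate pools.
# Values are hypothetical "improvement scores" for illustration only.
BUILTIN_TECHNIQUES = {"lr_warmup": 1.2, "grad_clipping": 0.9}
PAPER_CORPUS = {
    "batch halving rule (2022)": ("halve_batch_adjust_lr", 1.5),
    "post-cutoff paper (Feb 2025)": ("new_optimizer_tweak", 1.7),
}

def search_papers(query: str) -> list[tuple[str, float]]:
    # Stand-in for a literature search tool (e.g. an MCP search server);
    # a real agent would rank results by relevance to the query.
    return list(PAPER_CORPUS.values())

def best_improvement(use_papers: bool) -> float:
    candidates = dict(BUILTIN_TECHNIQUES)
    if use_papers:
        # Techniques from papers extend what the agent can try, including
        # work published after its training cutoff.
        candidates.update(dict(search_papers("lm optimization")))
    return max(candidates.values())

print(best_improvement(False))  # built-in knowledge only -> 1.2
print(best_improvement(True))   # literature-augmented    -> 1.7
```

The point the experiment makes is visible even in this caricature: the augmented pool strictly contains the built-in one, so paper access can only widen the set of reachable techniques.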
Built an AI memory system based on cognitive science instead of vector databases
Most AI agent memory is just vector DB + semantic search: store everything, retrieve by similarity. It works, but it doesn't scale well over time. The noise floor keeps rising and recall quality degrades.

I took a different approach and built memory using actual cognitive science models: ACT-R activation decay, Hebbian learning, Ebbinghaus forgetting curves. The system actively forgets stale information and reinforces frequently used memories, like how human memory works.

After 30 days in production: 3,846 memories, 230K+ recalls, $0 inference cost (pure Python, no embeddings required).

The biggest surprise was how much forgetting improved recall quality. Agents with active decay consistently retrieved more relevant memories than flat-store baselines. I am also working on multi-agent shared memory (namespace isolation + ACL) and an emotional feedback bus.

Curious what approaches others are using for long-running agent memory.

submitted by /u/Ni2021
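For reference, the ACT-R base-level activation the post alludes to is B = ln(Σ_j (now − t_j)^(−d)), where the t_j are past access times and d ≈ 0.5 is the standard decay rate. A minimal sketch of that equation, not the poster's implementation:

```python
import math

def base_level_activation(access_times: list[float], now: float, d: float = 0.5) -> float:
    """ACT-R base-level learning: B = ln(sum over accesses of (now - t)^-d).

    Recent and frequent accesses raise activation; unused memories decay
    toward the retrieval threshold and are eventually forgotten.
    """
    return math.log(sum((now - t) ** -d for t in access_times))

# A memory accessed often and recently out-activates a stale one,
# which is exactly the "active forgetting" behavior described above.
fresh = base_level_activation([90.0, 95.0, 99.0], now=100.0)
stale = base_level_activation([1.0], now=100.0)
print(fresh > stale)  # True
```

Retrieval then becomes "return memories above an activation threshold" instead of "return top-k by cosine similarity," which is why no embeddings are strictly required.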
Open source persistent memory for AI agents: local embeddings, no external APIs
Codeberg: Engram
Live demo: https://demo.engram.lol/gui (password: demo)

Built a memory server that gives AI agents long-term memory across sessions. Store what they learn, search by meaning, recall relevant context automatically.

- Embeddings run locally (MiniLM-L6), no OpenAI key needed
- Single SQLite file, no vector database required
- Auto-linking builds a knowledge graph between memories
- Versioning, deduplication, auto-forget
- Four-layer recall: static facts + semantic + importance + recency
- WebGL graph visualization built in
- TypeScript and Python SDKs

One file, `docker compose up`, done. MIT licensed.

edit: I can't sleep with this thing and haven't slept much for a while because of it; it went from ~2,300 lines to 6,200+. Here's what's new:

- **FSRS-6 spaced repetition**: replaced the old flat 30-day decay. Memories now decay on a power-law curve (same algorithm behind modern Anki). Every access counts as an implicit review, so frequently used memories stick around and unused ones fade naturally
- **Dual-strength memory model**: each memory tracks storage strength (deep encoding, never decays) and retrieval strength (current accessibility, decays over time). Based on Bjork & Bjork 1992. Makes recall scoring way more realistic
- **Native vector search via libsql**: moved from SQLite to libsql. Embeddings stored as FLOAT32(384) with ANN indexing. Search is O(log n) now instead of brute-force cosine similarity over everything
- **Conversation storage + search**: store full agent chat logs, search across messages, link to memory episodes
- **Episodic memory**: group memories into sessions/episodes

Everything from before is still there: local embeddings, auto-linking, versioning, dedup, four-layer recall, contradiction detection, time-travel queries, reflections, graph viz, multi-tenant, TypeScript/Python SDKs, MCP server. Still one file, still `docker compose up`, still MIT.

submitted by /u/Shattered_Persona
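The dual-strength idea above can be sketched as two numbers per memory: a storage strength that only grows with each access, and a retrieval strength that decays on a power law between accesses. A toy sketch under invented constants; this is not Engram's actual code and not the FSRS-6 scheduler, just the Bjork & Bjork shape:

```python
from dataclasses import dataclass

@dataclass
class DualStrengthMemory:
    """Toy dual-strength model: storage strength only grows;
    retrieval strength decays between accesses and is boosted
    by each access (an implicit review)."""
    storage: float = 1.0
    last_access: float = 0.0

    def access(self, now: float) -> None:
        self.storage += 0.5       # each retrieval deepens encoding
        self.last_access = now

    def retrieval_strength(self, now: float) -> float:
        elapsed = max(now - self.last_access, 1.0)
        # Power-law decay, slowed by storage strength: well-learned
        # items stay accessible longer (the spacing effect).
        return self.storage * elapsed ** (-0.5 / self.storage)

m_used = DualStrengthMemory()
m_stale = DualStrengthMemory()
for t in (10, 20, 30):
    m_used.access(t)            # reviewed three times
print(m_used.retrieval_strength(40) > m_stale.retrieval_strength(40))  # True
```

Recall scoring against retrieval strength (while keeping storage strength intact) is what lets a memory "fade" without being deleted, which is the property the post credits for more realistic recall.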
Yes, Recall.ai offers a free tier.