01.AI is committed to becoming an innovative company driven by technical vision and grounded in outstanding Chinese engineering, pushing the AI 2.0 revolution, sparked by foundation-model breakthroughs, across technology, platforms, and applications.
01.AI Updates WorldWise Enterprise LLM Platform 2.5, Marking 2026 as the Critical Year of Multi-Agent System Enterprise Deployment.

Recent headlines:
- Dr. Kai-Fu Lee's approach to tackling the enterprise AI market: "I am personally working with CEOs."
- 01.AI Co-founder Jeffrey Ma Presents at Yabuli China Entrepreneurs Forum.
- Kazakhstan President Tokayev Meets with Kai-Fu Lee, CEO of 01.AI.
- Dr. Kai-Fu Lee Reappointed to the Hong Kong Chief Executive's Council of Advisers.
- 01.AI Co-Founder Sandy Shen: GenAI Needs "CEO-Level" Executive Strategy to Cross Six Chasms to the Next Phase.
- 01.AI's LLM One-Box Solution Machine Receives Top-Level Certification from CAICT.

The team behind 01.AI firmly believes that the new AI 2.0, driven by foundation-model breakthroughs, is revolutionizing technology, platforms, and applications at all levels. We predict that AI 2.0 will create a platform opportunity ten times larger than the mobile internet, rewriting all software and user interfaces. This trend will give rise to the next wave of AI-first applications and AI-empowered business models, fostering AI 2.0 innovations over time. We hope to contribute to this new AI 2.0 ecosystem, realizing 01.AI's aspiration of "Human + AI": technology that empowers and amplifies humankind.
Mentions (30d): 0
Reviews: 0
Platforms: 2
GitHub stars: 7,839
GitHub forks: 486
GitHub followers: 1,205
GitHub repos: 12
npm packages: 20
HuggingFace models: 40
Industry: information technology & services
Employees: 76
Funding stage: Series A
Total funding: $200.0M
seCall – Search your AI agent chat history in Obsidian (CJK-aware BM25)
I've been spending about 80% of my dev time talking to terminal agents (Claude Code, Codex, Gemini CLI). At some point I thought — I should be able to search this stuff. Found a similar project a while back, but BM25 doesn't work well for Korean (or Japanese/Chinese), so I gave up. Recently had some Claude credits left over, so I went ahead and built it. What it does: ingests your terminal agent session logs, indexes them with hybrid BM25 + vector search (Korean morpheme analysis via Lindera), and stores everything as an Obsidian-compatible markdown vault. You can also register it as an MCP server in Claude Code and search old conversations directly from your agent. Also supports Claude.ai export (.zip) now. Built it as a test project for tunaFlow, my multi-agent orchestration app (not public yet). Honestly it's not that fancy — mostly just a Korean-friendly version of what qmd does, plus the wiki layer from Karpathy's LLM Wiki gist. Open source, AGPL-3.0. Stars and forks welcome 🐟 https://github.com/hang-in/seCall submitted by /u/d9ng-hang-in2 [link] [comments]
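The hybrid scoring seCall describes, BM25 lexical matching blended with vector similarity, can be sketched in a few lines. This is a minimal illustration rather than seCall's actual implementation: the real project tokenizes Korean with Lindera and persists an index, while here the documents, toy vectors, and the 50/50 blend weight are all made up:

```python
import math

def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    """Classic BM25 over pre-tokenized documents (for CJK text the
    tokens would come from a morpheme analyzer such as Lindera)."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    scores = []
    for doc in docs:
        score = 0.0
        for t in query_tokens:
            df = sum(1 for d in docs if t in d)  # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = doc.count(t)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_rank(query_tokens, query_vec, docs, doc_vecs, alpha=0.5):
    """Blend max-normalized BM25 with vector similarity; alpha weights
    lexical vs. dense. Returns document indices, best match first."""
    bm25 = bm25_scores(query_tokens, docs)
    top = max(bm25) or 1.0
    blended = [alpha * (s / top) + (1 - alpha) * cosine(query_vec, v)
               for s, v in zip(bm25, doc_vecs)]
    return sorted(range(len(docs)), key=lambda i: blended[i], reverse=True)

# Toy corpus: three tokenized "session log" snippets with 2-d vectors.
docs = [["claude", "session", "log"],
        ["obsidian", "vault", "markdown"],
        ["bm25", "search", "claude"]]
vecs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
order = hybrid_rank(["claude", "search"], [1.0, 0.0], docs, vecs)
print(order)  # → [2, 0, 1]: doc 2 matches both query tokens and the query vector
```

In practice the dense vectors come from an embedding model, and the blend weight is what lets CJK-heavy logs still rank well when lexical overlap is poor.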
Capybara V4 Log Appeared On Claude App
submitted by /u/Shoddy-Department630 [link] [comments]
I built a CLI that installs MCP, skills, prompts, commands and sub-agents into any AI tool (Cursor, Claude Code, Windsurf, etc.)
Install Sub-agents, Skills, MCP Servers, Slash Commands and Prompts Across AI Tools with agent-add

agent-add lets you install virtually every type of AI capability across tools — so you can focus on what to install and where, without worrying about each tool's config file format.

https://preview.redd.it/kemovi39qitg1.jpg?width=1964&format=pjpg&auto=webp&s=b994b81f343ee01afdf23392e13e0d472c71a47d

It's especially useful when:
- You're an AI capability developer shipping MCP servers, slash commands, sub-agents, or skills
- Your team uses multiple AI coding tools side by side

You can also use agent-add simply to configure your own AI coding tool — no need to dig into its config file format.

Getting Started

agent-add runs directly via npx — no install required:

npx -y agent-add --skill 'https://github.com/anthropics/skills.git#skills/pdf'

agent-add requires Node.js. Make sure it's installed on your machine.

Here's a more complete example:

npx -y agent-add \
  --mcp '{"playwright":{"command":"npx","args":["-y","@playwright/mcp"]}}' \
  --mcp 'https://github.com/modelcontextprotocol/servers.git#.mcp.json' \
  --skill 'https://github.com/anthropics/skills.git#skills/pdf' \
  --prompt $'# Code Review Rules\n\nAlways review for security issues first.' \
  --command 'https://github.com/wshobson/commands.git#tools/security-scan.md' \
  --sub-agent 'https://github.com/VoltAgent/awesome-claude-code-subagents.git#categories/01-core-development/backend-developer.md'

For full usage details, check the project README, or just run:

npx -y agent-add --help

Project & Supported Tools

The source code is hosted on GitHub: https://github.com/pea3nut/agent-add

Here's the current support matrix:

AI Tool        | MCP | Prompt | Skill | Command | Sub-agent
Cursor         | ✅  | ✅     | ✅    | ✅      | ✅
Claude Code    | ✅  | ✅     | ✅    | ✅      | ✅
Trae           | ✅  | ✅     | ✅    | ❌      | ❌
Qwen Code      | ✅  | ✅     | ✅    | ✅      | ✅
GitHub Copilot | ✅  | ✅     | ✅    | ✅      | ✅
Codex CLI      | ✅  | ✅     | ✅    | ❌      | ✅
Windsurf       | ✅  | ✅     | ✅    | ✅      | ❌
Gemini CLI     | ✅  | ✅     | ✅    | ✅      | ✅
Kimi Code      | ✅  | ✅     | ✅    | ❌      | ❌
Augment        | ✅  | ✅     | ✅    | ✅      | ✅
Roo Code       | ✅  | ✅     | ✅    | ✅      | ❌
Kiro CLI       | ✅  | ✅     | ✅    | ❌      | ✅
Tabnine CLI    | ✅  | ✅     | ❌    | ✅      | ❌
Kilo Code      | ✅  | ✅     | ✅    | ✅      | ✅
opencode       | ✅  | ✅     | ✅    | ✅      | ✅
OpenClaw       | ❌  | ✅     | ✅    | ❌      | ❌
Mistral Vibe   | ✅  | ✅     | ✅    | ❌      | ❌
Claude Desktop | ✅  | ❌     | ❌    | ❌      | ❌

submitted by /u/pea3nut [link] [comments]
Claude Code Source Code Leak just before April Fools. A coincidence?
https://techcrunch.com/2026/04/01/anthropic-took-down-thousands-of-github-repos-trying-to-yank-its-leaked-source-code-a-move-the-company-says-was-an-accident/ https://www.theguardian.com/technology/2026/apr/01/anthropic-claudes-code-leaks-ai Repo link: https://github.com/Thegoldenwitwork/claude-code-source submitted by /u/Jonathan-MFR-Chris [link] [comments]
I am fully blind, and this is why Claude is changing my life.
So, I want to tell you about my experience with Claude. Firstly, I am fully blind. I am telling you this because it is the main reason why AI has such an incredible impact on my life. I have been a tech user since I was a small child, building small apps and programs, because back then accessibility was, as it is today, hardly existing in the sense you would expect from modern life. Granted, today is much better, but I had to learn a little bit of coding to try and help myself. Even though I am blind, I have been blessed with an able body and mind, and as such, technology has always interested me. My professional life has been as an IT consultant for blind and visually impaired people, as well as a consultant on digital accessibility for large organizations. Therefore, when OpenAI took the world by storm, I was naturally among the first people to check it out. And well, what a game changer. Yes, it was nice having ChatGPT write funny texts and such, but I knew that it could really help me. Fast forward, and suddenly it was able to recognize images. Say what? Now I, as a blind person, could have an image analyzed and described to me in great detail, more often than not better than my sighted friends could. Time flew by, and suddenly this new AI called Claude came to the public. I experienced much better coding, better responses, and overall a better interaction. It took a while for Claude to catch up to ChatGPT in terms of image descriptions, but a few years later I had a powerful tool in my hands. Not just "Wow, cool, I can have it help me write my mails" or "Wow, cool, it can help me debug my code". No, this was more like: "Holy hell, I can have it describe images to me!", "Incredible! It can create a slide show for my presentation at Microsoft!". Best of all, it makes my life easier, better, more fulfilled. I run a small consultancy business, and I can build small apps and programs that really help me.
An example: a price calculator. Before:
- Customer sends a request for a 3D print.
- I have to open the file in a completely inaccessible slicer to get the different values I need to calculate an offer.
- Then I had to type the values into Excel.
- Then read the results from Excel.
- Then try to create an offer that looked okay and made sense using Word.
- Then open Outlook, write a mail, attach the offer, and send it.
This is something that took up to 30 minutes to do. Then I created a small app using Claude Code. With this app, I can import the 3D file, and it will automatically do all of the above for me, literally. This takes about 3 to 5 minutes. Time management can also be a challenge. I use Siri, which works most of the time, but once it becomes complicated (you have to add a location, you have to write some notes) it becomes time-consuming. I am now building an app on my iPhone that can automate all of this for me. From image description to document creation, coding, and app development using Claude Code along with agents, Claude is giving me everyday independence like I could only dream about. For me, AI really has the potential to give me a place in this world on the same level as sighted people and non-disabled people. Hell, I have even been recognized in publications such as Hackster, 3D Printing Industry, and Hackaday for what they call my innovative 3D design method. Quite frankly, I wish that AI tools such as Claude, ChatGPT and others would become free of charge for blind people. Not because we are entitled to it, but because it is a substitute for sight. Anyway, for those of you who got this far through my thoughts, thank you for reading along, and I hope you use AI productively. submitted by /u/Mrblindguardian [link] [comments]
Claude Status Update: Unavailable connectors in Claude.ai desktop applications on 2026-03-31T21:01:33.000Z
This is an automatic post triggered within 2 minutes of an official Claude system status update. Incident: Unavailable connectors in Claude.ai desktop applications Check on progress and whether or not the incident has been resolved yet here : https://status.claude.com/incidents/z5scppyhphjk Also check the Performance Megathread to see what others are reporting : https://www.reddit.com/r/ClaudeAI/comments/1pygdbz/usage_limits_bugs_and_performance_discussion/ submitted by /u/ClaudeAI-mod-bot [link] [comments]
The job search grind was killing me, so I built AI agents to do it
Dashboard Preview I'm a CPA, not a developer. I'm looking for a job at the intersection of AI and finance, and the process of searching for openings, doing company research, and tailoring my CV is such a massive time sink. So I automated it. 1-minute demo: https://youtu.be/L-8e5EkNv1w Repo: https://github.com/muggl3mind/career-manager This is NOT a resume auto-submitter or some kind of precursor to a SaaS product. I built it for myself, but it's saving me so much time I thought others might get some value out of it. The whole thing was built with Claude Code. You paste 1 prompt into Claude Code and it asks for your resume, then kicks off a bunch of subagents to do the research, and drops you into a dashboard for review. It can: - Discover and score companies against your job niche - Generate deep company research (financials, leadership, culture signals) - Tailor your CV for a specific role - Track applications and flag follow-ups - Surface direct points of contact at the company Happy to answer questions about the build or how the subagent orchestration works. submitted by /u/Novel-Associate-9799 [link] [comments]
I built a full-stack serverless AI agent platform on AWS in 29 hours using Claude Code — here's the entire journey as a tutorial
TL;DR: Built a complete AWS serverless platform that runs AI agents for ~$0.01/month — entirely through conversational prompts to Claude Code over 5 weeks. Documented every prompt, failure, and fix as a 7-chapter vibe coding tutorial. GitHub repo.

What I built

Serverless OpenClaw runs the OpenClaw AI agent on-demand on AWS — with a React web chat UI and Telegram bot. The entire infrastructure deploys with a single cdk deploy. The twist: every line of code was written through Claude Code conversations. No manual coding — just prompts, reviews, and course corrections.

The numbers

Metric               | Value
Development time     | ~29 hours across 5 weeks
Total AWS cost       | ~$0.25 during development
Monthly running cost | ~$0.01 (Lambda)
Unit tests           | 233
E2E tests            | 35
CDK stacks           | 8
TypeScript packages  | 6 (monorepo)
Cold start           | 1.35s (Lambda), 0.12s warm

The cost journey

This was the most fun part. Claude Code helped me eliminate every expensive AWS component one by one:

What we eliminated              | Savings
NAT Gateway                     | -$32/month
ALB (Application Load Balancer) | -$18/month
Fargate always-on               | -$15/month
Interface VPC Endpoints         | -$7/month each
Provisioned DynamoDB            | Variable

Result: from a typical ~$70+/month serverless setup down to $0.01/month on Lambda with zero idle costs. Fargate Spot is available as a fallback for long-running tasks.

How Claude Code was used

This wasn't "generate a function" — it was full architecture sessions:
- Architecture design: "Design a serverless platform that costs under $1/month" → Claude Code produced the PRD, CDK stacks, network design
- TDD workflow: Claude Code wrote tests first, then implementation. 233 tests before a single deploy
- Debugging sessions: Docker build failures, cold start optimization (68s → 1.35s), WebSocket auth issues — all solved conversationally
- Phase 2 migration: moved from Fargate to Lambda Container Image mid-project. Claude Code handled the entire migration, including S3 session persistence and smart routing

The prompts were originally in Korean, and Claude Code handled bilingual development seamlessly.

Vibe Coding Tutorial (7 chapters)

I reconstructed the entire journey from Claude Code conversation logs into a step-by-step tutorial:

# | Chapter                  | Time | Key Topics
1 | The $1/Month Challenge   | ~2h  | PRD, architecture design, cost analysis
2 | MVP in a Weekend         | ~8h  | 10-step Phase 1, CDK stacks, TDD
3 | Deployment Reality Check | ~4h  | Docker, secrets, auth, first real deploy
4 | The Cold Start Battle    | ~6h  | Docker optimization, CPU tuning, pre-warming
5 | Lambda Migration         | ~4h  | Phase 2, embedded agent, S3 sessions
6 | Smart Routing            | ~3h  | Lambda/Fargate hybrid, cold start preview
7 | Release Automation       | ~2h  | Skills, parallel review, GitHub releases

Each chapter includes: the actual prompt given → what Claude Code did → what broke → how we fixed it → lessons learned → reproducible commands. Start the tutorial here →

Tech stack

TypeScript monorepo (6 packages) on AWS: CDK for IaC, API Gateway (WebSocket + REST), Lambda + Fargate Spot for compute, DynamoDB, S3, Cognito auth, CloudFront + React SPA, Telegram Bot API. Multi-LLM support via Anthropic API and Amazon Bedrock.

Patterns you can steal
- API Gateway instead of ALB — saves $18+/month. WebSocket + REST on API Gateway with Lambda handlers
- Public subnet Fargate (no NAT) — $0 networking cost. Security via 6-layer defense (SG + Bearer token + TLS + localhost + non-root + SSM)
- Lambda Container Image for agents — zero idle cost, 1.35s cold start. S3 session persistence for context continuity
- Smart routing — Lambda for quick tasks, Fargate for heavy work, automatic fallback between them
- Cold start message queuing — messages during container startup stored in DynamoDB, consumed when ready (5-min TTL)

The repo is MIT licensed and PRs are welcome.
Happy to answer questions about any of the architecture decisions, cost optimization tricks, or how to structure long Claude Code sessions for infrastructure projects. GitHub | Tutorial submitted by /u/Consistent-Milk-6643 [link] [comments]
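The cold start message queuing pattern from the "Patterns you can steal" list can be sketched independently of AWS. Below is a toy in-memory stand-in for what the post stores in DynamoDB; the 5-minute TTL is from the post, but the class name, clock injection, and drain logic are my illustration:

```python
import time

TTL_SECONDS = 300  # the post's 5-minute TTL

class ColdStartQueue:
    """Buffer messages that arrive while the agent container is still
    starting; drop anything older than the TTL (in the real setup,
    DynamoDB expires items via a TTL attribute)."""

    def __init__(self, now=time.time):
        self.now = now      # injectable clock, handy for testing
        self.pending = []   # (arrival_timestamp, message) pairs

    def enqueue(self, message):
        self.pending.append((self.now(), message))

    def drain(self):
        """Called once the container reports ready: return still-fresh
        messages in arrival order and discard expired ones."""
        cutoff = self.now() - TTL_SECONDS
        fresh = [m for ts, m in self.pending if ts >= cutoff]
        self.pending.clear()
        return fresh

# Simulated timeline: one message expires during a very long cold start.
clock = [0.0]
q = ColdStartQueue(now=lambda: clock[0])
q.enqueue("hello?")           # t=0, user pings a cold container
clock[0] = 400.0
q.enqueue("still there?")     # t=400, cold start dragged on
clock[0] = 410.0
print(q.drain())              # → ['still there?']; the t=0 message expired
```

In the real architecture the Lambda handler would write pending items on arrival and the warmed container would read and delete them, with DynamoDB doing the expiry server-side.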
I don't use AI to write my reports. I built a system that remembers how to do it.
So I wrote a whole Medium post about this, but like…5 claps after three days, lol. Figured I'd share a shorter version here since I already put in the effort. Yes, I still write weekly reports in 2026. Very corporate, very dinosaur energy. But here's the thing: I don't mind writing reports (I sort of like them as a signal of the week's end). What I mind is re-explaining the same context to ChatGPT every single week. You know the drill. Friday rolls around, you paste your notes into ChatGPT, and it goes: "Sure! What format would you like?" Didn't I tell you last week? So you dig up last week's report, copy-paste it as a reference, and spend 20 minutes babysitting the output because it forgot Feature X was supposed to ship last Tuesday. I did this for months. Then I realized: why am I the one remembering things for an AI? Here's what I changed. I stopped relying on ChatGPT's memory and built a file-based system instead. I'm using Halomate, though the principles work with any AI tool that supports persistent workspaces. I actually tried Poe first, but their memory resets between sessions, so that never worked out. Now all my past reports live as markdown files. My product roadmap is a file. Data analysis is a file. Everything's organized, not buried in some chat from three weeks ago. The Weekly Reports Project workspace: all files live in one shared space. I have an AI assistant I call Axel. His job is the communication side, including writing reports. When I need a new one, I paste my messy notes and ask Axel to clean the notes and generate the weekly report. He reads last week's report from the actual file, not from fuzzy memory. He checks the roadmap file. He pulls in data analysis. Then he writes the new report. Takes a few minutes now. The thing is, files don't forget, but conversations do. ChatGPT's memory is fuzzy. It kind of remembers you like bullet points, thinks you mentioned something about a product launch but can't remember when. With files, there's no ambiguity.
If I wrote "Feature X ships Tuesday" in Week_3_Report.md, Axel reads it and knows. If this week's notes don't mention Feature X, he flags it: "Last week we committed to Feature X, no update?" I also keep separate AI assistants for different jobs. Axel writes reports. Query handles data analysis. Leo maintains the product roadmap. Why separate? I want all my assistants to be specialists, and later on, if I need them on other projects, they already know how. Ah, and it also saves credits! When I need a quick chart, I don't want to load Axel's 52 weeks of report context. Query does the chart, saves it as a file, and Axel references it later. Also, I can swap models without losing context. Most weeks I use Claude for Axel. Sometimes I want a second opinion, so I regenerate with GPT or Gemini. But Axel's personality and memory don't reset; only the model underneath changes. Remember when OpenAI deprecated GPT-4o and people felt actual grief? I also migrated my old 4o persona here and built a new mate using that persona and memory. The way I think about it: if a model shuts down tomorrow, I switch engines and keep going. Now my actual Friday workflow: all week I keep rough notes. Friday I paste the mess and type: "Clean the notes and generate the weekly report." Axel reads last week's report, scans my notes, checks the product roadmap and new data analysis, and writes a new report for this week. Done. And maybe later I need a quarterly report? Axel will just read all 12 weekly reports, write a summary, and generate a decent report if needed. Something like this (all mock data). https://preview.redd.it/bv4w7ff64xqg1.png?width=720&format=png&auto=webp&s=732f82e8d029daead86c7d2e5905a7cf9654c421 I don't know if this is useful to anyone else. Maybe everyone's moved past weekly reports. But this mechanism could be applied to anything you need to build over time. Anyway, if you're tired of re-explaining context every week, maybe this helps. submitted by /u/AIWanderer_AD [link] [comments]
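The flagging step described above ("Last week we committed to Feature X, no update?") is mechanical enough to sketch. This toy version is my illustration, not Halomate's behavior: the regex and file contents are hypothetical, and only the Week_3_Report.md-style commitment idea comes from the post:

```python
import re

def commitments(report_text):
    """Pull 'Feature X ships Tuesday'-style commitments out of a report."""
    return set(re.findall(r"(Feature \w+) ships", report_text))

def flag_missing(last_week_report, this_week_notes):
    """Flag commitments from last week's report that this week's notes never mention."""
    return [f"Last week we committed to {feat}, no update?"
            for feat in sorted(commitments(last_week_report))
            if feat not in this_week_notes]

# Hypothetical file contents for illustration.
last_week = "Feature X ships Tuesday. Feature Y ships Friday."
notes = "Feature Y shipped on time. Demo went well."
print(flag_missing(last_week, notes))
# → ['Last week we committed to Feature X, no update?']
```

A real assistant does this with language understanding rather than a regex, but the principle is the same: commitments live in files, so checking them is a lookup, not a memory.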
Claude Sonnet 4.6 placed 2nd of 9 models, best judge calibration, but lost to GPT-5.4 on decisiveness. Data from MiniMax M2.7 release-day eval
M2.7 claims it can iteratively improve its own output. My evals are single-turn. If you have ideas for multi-turn tasks that would test this against Claude, drop them below or in the Discord (https://discord.gg/QvVTPCxH). I'll run the best suggestions next.

Serving disclosure: all models ran through the OpenRouter API. Quantization/inference was not controlled by the evaluator.

I run a blind peer evaluation system (The Multivac, open-source) that tests AI models on hard tasks. MiniMax released M2.7 today with self-improvement claims, so I ran 9 models through 13 evaluations. Claude Sonnet 4.6 was both a contestant and a judge.

Claude-specific findings: Claude Sonnet 4.6 averaged 8.65 across all 13 evals, placing 2nd behind GPT-5.4 (9.26). The 0.61-point gap was consistent. Claude placed top 2 in 10 of 13 evals and won Simpson's Paradox outright at 9.71.

As a judge, Claude was the most balanced. It gave an average score of 7.46, which is moderate-strict. GPT-5.4 was harsher at 5.80. MiniMax models averaged 9.0+. Claude's individual judge scores tracked the final averaged rankings more closely than any other judge except GPT-5.4. If I had to pick two judges for every future eval, it would be Claude and GPT.

Where Claude underperformed: on the investment theory eval (Decision Under Deep Uncertainty), Claude placed 5th at 7.01. The issue: Claude explained all three investment options thoroughly but did not commit to a recommendation. MiniMax M2.5 placed 2nd at 9.03 because it gave a direct answer ("Investment B for $10K savings, Investment A for $10M") and then justified it. Claude over-indexed on balanced analysis when the question asked for a verdict. This pattern appeared on at least 2 other evals: Claude consistently explains rather than recommends. For tasks where "which one should I pick?" is the real question, Claude's thoroughness becomes a liability.

A few things I want to know more about: Have you noticed Claude being stronger on explanation than on recommendation?
Is this consistent with your experience? Claude as judge was moderate-strict (7.46 avg). Do you find that calibration useful, or do you prefer harsher evaluation? The 0.61-point gap to GPT-5.4 showed up on reasoning tasks specifically. On code tasks the gap was smaller. Does this match what you see? submitted by /u/Silver_Raspberry_811 [link] [comments]
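For readers unfamiliar with blind peer evaluation, the core mechanic (every model judges the contestants, and per-judge scores are averaged into a final ranking) can be sketched as follows. The scores below are toy numbers echoing the post's calibration note (a harsh GPT judge, a moderate-strict Claude, a lenient MiniMax), not the actual eval data, and the exclude-self-judging rule is my assumption; the post does not say how The Multivac handles a model scoring itself:

```python
def blind_peer_rank(judge_scores):
    """judge_scores: {judge: {contestant: score}}. Each contestant's
    final score is the mean over all judges except itself (assumed:
    no self-judging). Returns (contestant, final) pairs, best first."""
    contestants = {c for scores in judge_scores.values() for c in scores}
    finals = {}
    for c in contestants:
        marks = [scores[c] for judge, scores in judge_scores.items() if judge != c]
        finals[c] = sum(marks) / len(marks)
    return sorted(finals.items(), key=lambda kv: kv[1], reverse=True)

# Toy data: judges with very different calibrations, as in the post.
scores = {
    "gpt":     {"gpt": 9.0, "claude": 8.0, "minimax": 5.5},  # harsh judge
    "claude":  {"gpt": 9.5, "claude": 8.5, "minimax": 7.5},  # moderate-strict
    "minimax": {"gpt": 9.8, "claude": 9.2, "minimax": 9.5},  # lenient
}
for model, final in blind_peer_rank(scores):
    print(f"{model}: {final:.2f}")
```

Averaging across differently calibrated judges (and dropping self-scores) is what keeps a single harsh or lenient judge from dominating the final ranking.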
OpenAI is throwing everything into building a fully automated researcher
OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. OpenAI says that the new goal will be its "north star" for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability. There's even a timeline. OpenAI plans to build "an autonomous AI research intern"—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. This AI researcher (OpenAI says) will be able to tackle problems that are too large or complex for humans to cope with. Those tasks might be related to math and physics—such as coming up with new proofs or conjectures—or life sciences like biology and chemistry, or even business and policy dilemmas. In theory, you could throw any kind of problem at such a tool that can be formulated in text, code, or whiteboard scribbles—which covers a lot. Read the full story for an exclusive conversation with OpenAI's chief scientist Jakub Pachocki about his firm's new grand challenge and the future of AI. submitted by /u/techreview [link] [comments]
Change in project file context limit?
Previously I uploaded a 22k-line text file to my project, which used up ~40% of the project file limit, and it worked well and accurately by referring back to the project file. Since this week, however, the project has suddenly become unworkable, and the system says I have exceeded the context size limit. How do I resolve this? submitted by /u/Swimming_Window_7957 [link] [comments]
Neuralink Co-Founder Max Hodak: The Future Of Brain-Computer Interfaces | Y Combinator Podcast
Synopsis: YC alum Max Hodak is the co-founder of Neuralink and founder of Science, a company building brain-computer interfaces that can restore sight. Science has developed a tiny retinal implant that stimulates cells in the eye to help blind patients see again. More than 40 patients have already received the treatment in clinical trials, including one who recently read a full novel for the first time in over a decade. In this episode of How to Build the Future, Max joined Garry to discuss how BCIs work, what it takes to engineer the brain, and why brain-computer interfaces may become one of the most important technologies of the next decade. Timestamps: [00:00:31] Welcome Max Hodak [00:00:54] Restoring Sight with the Prima Implant [00:01:57] What is a Brain-Computer Interface (BCI)? [00:05:51] Neuroplasticity and BCI [00:09:31] The Qualia of BCI [00:13:10] The Next 5 to 10 Years [00:24:29] Max's Background in Tech and Biology [00:29:03] Biohybrid Neural Interfaces [00:33:04] Lessons from Neuralink [00:34:31] The Unification of AI and Neuroscience [00:39:42] The Vessel Program (Organ Perfusion) [00:44:25] The Origins of Neuralink [00:47:20] Advice for Founders [00:51:32] The 2035 Event Horizon Link to the Full Interview: https://www.youtube.com/watch?v=5gspRJVp9dI Spotify PocketCast Apple Podcasts submitted by /u/44th--Hokage [link] [comments]
is it just me or are they using chat gpt to fix chat gpt?
It's giving me those Codex "I'm going to make a second pass to ensure there is no regression" vibes submitted by /u/DannyS091 [link] [comments]
View originalRepository Audit Available
Deep analysis of 01-ai/Yi — architecture, costs, security, dependencies & more
01.ai uses a tiered pricing model. Visit their website for current pricing details.
Stated vision: Make AGI accessible and beneficial to everyone.
01.ai has a public GitHub repository with 7,839 stars.
Based on 19 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.