Based on the provided social mentions, I cannot find specific user reviews or feedback about Microsoft Copilot for Teams. The social media posts are primarily promotional content from Microsoft's official accounts showcasing various Copilot integrations across different applications (Excel, Office 365) and use cases, rather than genuine user testimonials or complaints. The mentions focus on demonstrating Copilot's capabilities in diverse scenarios like pharmacy management in Kenya and Premier League data analysis, but don't reveal user sentiment about pricing, performance issues, or overall satisfaction with the Teams-specific version. To provide an accurate summary of user opinions about Copilot for Teams, actual user reviews and feedback would be needed.
- Mentions (30d): 22 (7 this week)
- Reviews: 0
- Platforms: 3
- Sentiment: 0% (0 positive)
- Industry: information technology & services
- Employees: 228,000
https://t.co/hPczAuiL8J
The New Yorker published a major investigation into Sam Altman and OpenAI today, based on never-before-disclosed internal memos and 100+ interviews
Ronan Farrow spent 18 months reporting this piece, drawing on internal documents that haven't previously been made public — including ~70 pages of memos compiled by Ilya Sutskever and 200+ pages of private notes kept by Dario Amodei. The piece covers a lot of ground. Some of what's in it:

- The specific concerns that led the board to fire Altman in 2023. Sutskever's memos allege a pattern of deception about safety protocols. One begins with a list: "Sam exhibits a consistent pattern of . . ." The first item is "Lying."
- The superalignment team was publicly promised 20% of compute. People who worked on the team say actual resources were 1-2%, on the oldest hardware. The team was dissolved without completing its mission. When reporters asked to interview OpenAI researchers working on existential safety, a company representative replied: "What do you mean by 'existential safety'? That's not, like, a thing."
- After Altman's reinstatement, the firm behind the Enron and WorldCom investigations was hired to review the allegations. No written report was ever produced. Findings were limited to oral briefings.
- In a tense call after his firing, the board pressed Altman to acknowledge a pattern of deception. "I can't change my personality," he said. A board member's interpretation: "What it meant was 'I have this trait where I lie to people, and I'm not going to stop.'"
- In OpenAI's early years, executives discussed playing world powers including China and Russia against each other in a bidding war for AI. The company's own policy adviser: "We're talking about potentially the most destructive technology ever invented — what if we sold it to Putin?" The plan was dropped after employees threatened to quit.
- When Anthropic refused a Pentagon ultimatum to drop its prohibitions on autonomous weapons, Altman publicly claimed solidarity. But he'd been negotiating with the Pentagon for at least two days. That Friday, OpenAI announced a $50B deal integrating its models into military infrastructure.
- Multiple senior Microsoft executives described the relationship as "fraught." One: "He has misrepresented, distorted, renegotiated, reneged on agreements."

submitted by /u/Altruistic-Top9919
I built a CLI that installs MCP, skills, prompts, commands and sub-agents into any AI tool (Cursor, Claude Code, Windsurf, etc.)
Install Sub-agents, Skills, MCP Servers, Slash Commands and Prompts Across AI Tools with agent-add

agent-add lets you install virtually every type of AI capability across tools — so you can focus on what to install and where, without worrying about each tool's config file format.

It's especially useful when:

- You're an AI capability developer shipping MCP servers, slash commands, sub-agents, or skills
- Your team uses multiple AI coding tools side by side

You can also use agent-add simply to configure your own AI coding tool — no need to dig into its config file format.

Getting Started

agent-add runs directly via npx — no install required:

```
npx -y agent-add --skill 'https://github.com/anthropics/skills.git#skills/pdf'
```

agent-add requires Node.js. Make sure it's installed on your machine.

Here's a more complete example:

```
npx -y agent-add \
  --mcp '{"playwright":{"command":"npx","args":["-y","@playwright/mcp"]}}' \
  --mcp 'https://github.com/modelcontextprotocol/servers.git#.mcp.json' \
  --skill 'https://github.com/anthropics/skills.git#skills/pdf' \
  --prompt $'# Code Review Rules\n\nAlways review for security issues first.' \
  --command 'https://github.com/wshobson/commands.git#tools/security-scan.md' \
  --sub-agent 'https://github.com/VoltAgent/awesome-claude-code-subagents.git#categories/01-core-development/backend-developer.md'
```

For full usage details, check the project README, or just run:

```
npx -y agent-add --help
```

Project & Supported Tools

The source code is hosted on GitHub: https://github.com/pea3nut/agent-add

Here's the current support matrix:

| AI Tool | MCP | Prompt | Skill | Command | Sub-agent |
|---|---|---|---|---|---|
| Cursor | ✅ | ✅ | ✅ | ✅ | ✅ |
| Claude Code | ✅ | ✅ | ✅ | ✅ | ✅ |
| Trae | ✅ | ✅ | ✅ | ❌ | ❌ |
| Qwen Code | ✅ | ✅ | ✅ | ✅ | ✅ |
| GitHub Copilot | ✅ | ✅ | ✅ | ✅ | ✅ |
| Codex CLI | ✅ | ✅ | ✅ | ❌ | ✅ |
| Windsurf | ✅ | ✅ | ✅ | ✅ | ❌ |
| Gemini CLI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Kimi Code | ✅ | ✅ | ✅ | ❌ | ❌ |
| Augment | ✅ | ✅ | ✅ | ✅ | ✅ |
| Roo Code | ✅ | ✅ | ✅ | ✅ | ❌ |
| Kiro CLI | ✅ | ✅ | ✅ | ❌ | ✅ |
| Tabnine CLI | ✅ | ✅ | ❌ | ✅ | ❌ |
| Kilo Code | ✅ | ✅ | ✅ | ✅ | ✅ |
| opencode | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenClaw | ❌ | ✅ | ✅ | ❌ | ❌ |
| Mistral Vibe | ✅ | ✅ | ✅ | ❌ | ❌ |
| Claude Desktop | ✅ | ❌ | ❌ | ❌ | ❌ |

submitted by /u/pea3nut
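The core idea behind a tool like this — one capability spec fanned out into each tool's own config file — can be sketched roughly as follows. This is an illustration only, not agent-add's actual implementation; the target paths and the `render_config` helper are assumptions for the example.

```python
# Illustration only — not agent-add's real code.
# One shared MCP server spec, written once, gets written out to
# each tool's own config file location (paths here are assumptions).
import json

MCP_SPEC = {"playwright": {"command": "npx", "args": ["-y", "@playwright/mcp"]}}

# Hypothetical per-tool target paths; real tools differ in both
# location and top-level format, which is what agent-add abstracts.
TARGETS = {"claude-code": ".mcp.json", "cursor": ".cursor/mcp.json"}

def render_config(spec: dict) -> str:
    """Wrap a shared server spec in a common 'mcpServers' envelope."""
    return json.dumps({"mcpServers": spec}, indent=2)

for tool, path in TARGETS.items():
    print(f"{tool} -> {path}")
    # a real installer would merge render_config(MCP_SPEC) into `path`
```

The point of the abstraction is that the spec is authored once and the per-tool serialization is the tool's problem, not yours.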
I open-sourced 31 AI prompts that turn a visiting card into a full credit due diligence — built by a banker using Claude, not by a developer
17+ years in MSME credit underwriting at banks in India. Not a developer. Can't write a single line of code from scratch. Just a domain guy who got tired of watching the same problem repeat.

The problem: Credit teams in banks receive a visiting card from the sales team. Then they spend 3-4 weeks collecting 47 documents — balance sheets, stock statements, CMA data, CA certificates, ITRs, property papers. Only after all that, someone discovers the borrower has an NCLT case. Or a cancelled GST. Or three cheque bounce cases. The proposal gets declined after weeks of wasted effort. Or worse — it gets sanctioned because nobody checked. Most of these red flags are publicly discoverable on Day 1. From a visiting card.

What I built: 31 prompts across 10 categories that extract maximum intelligence from just 5 inputs off a visiting card — company name, city, GSTIN (India's tax ID), director name, and DIN (director identification number).

Categories: entity verification, director/promoter background checks, NCLT/insolvency search, market reputation, GST turnover analysis, credit rating, group entity mapping, shell company detection, sector risk, and a final go/no-go memo.

These prompts work across any LLM — ChatGPT, Claude, Gemini, Perplexity, Copilot. No proprietary tool needed. Just copy, paste, investigate.

How I built it: I'm not a coder. I built the entire tool — the prompt library, the React app, the constitution-based logic, and the GitHub Pages deployment — through a conversation with Claude (Anthropic's AI). I described the credit workflow, the due diligence dimensions, the nuances of Indian banking regulations, and Claude helped me structure the prompts and build the web interface. A domain expert with 17 years of credit knowledge + an AI that can code = a working product in one sitting. No bootcamp. No developer hired. No framework learned. That's the real story here. Not just the tool — but what's now possible when deep domain expertise meets AI.

Single HTML file. No backend. No database. No login. No cost.

👉 Live tool: https://igmuralikrishnan-cmd.github.io/credit-dd-prompt-generator/
👉 GitHub repo: https://github.com/igmuralikrishnan-cmd/credit-dd-prompt-generator

Why I'm sharing here: MSME lending in India is a $300B+ market. 63 million MSMEs. Most are underserved because the credit appraisal process is slow, manual, and document-heavy. If prompts like these can compress the first stage of due diligence from 3 weeks to 30 minutes — that's a meaningful unlock. I'm not building a startup around this (yet). Just putting it out there for the lending ecosystem.

Would love feedback on:

- Do similar prompt-based pre-screening tools exist in other lending markets?
- Would this concept translate to SME lending in the US/UK/SEA?
- Any non-developers here who've built domain tools using Claude or other AI? What was your experience?

submitted by /u/Infinite-Voice-2896
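Mechanically, a prompt generator over five fixed inputs is just templating. Here is a minimal sketch of the idea, assuming a hypothetical NCLT-search template — the tool's actual prompts are not reproduced here, and the sample company details are invented for illustration.

```python
# Illustration only — not the tool's actual prompts.
# One way a generator can fill the five visiting-card inputs
# (company, city, GSTIN, director, DIN) into a due-diligence prompt.
from string import Template

# Hypothetical template, loosely in the spirit of the NCLT category.
NCLT_TEMPLATE = Template(
    "Search public NCLT/insolvency records for $company ($city). "
    "Cross-check director $director (DIN $din) and GSTIN $gstin. "
    "List any pending or disposed cases with dates and outcomes."
)

def build_prompt(company, city, gstin, director, din):
    return NCLT_TEMPLATE.substitute(
        company=company, city=city, gstin=gstin, director=director, din=din
    )

# Sample inputs are made up for the example.
prompt = build_prompt("Acme Traders Pvt Ltd", "Pune", "27AAAAA0000A1Z5",
                      "A. Sharma", "00000001")
print(prompt)
```

With 31 templates over the same five inputs, the whole "paste into any LLM" workflow reduces to selecting a template and substituting.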
Cloud scheduled tasks can't access MCP connectors — anyone find a workaround or solution? Or have any insight on it beyond what I list here?
Scheduled tasks on Claude Code (cloud, via claude.ai/code/scheduled) can't see any MCP connectors when they fire autonomously. Doesn't matter which connector — I've tested with multiple Zoho connectors and Microsoft 365. The agent runs ToolSearch, finds nothing, and tells you the tools need to be connected. They're connected. They work fine in interactive chat.

The tell: if you open the failed session and send any message — literally just "try again" — everything works instantly. No config changes. The tools just appear once a human is in the session.

This makes scheduled tasks useless for anything that touches an external service. Email summaries, channel monitoring, CRM lookups, posting to chat platforms — none of it works autonomously. Which is the entire point of scheduling.

What I've tried (nothing works):

- Deleted and recreated the task
- Disabled all connectors on the task, saved, re-enabled, saved
- Simplified to a minimal test prompt
- Switched models
- Different prompt content entirely

This seems to be a known bug with no workaround. Multiple GitHub issues document it across different connectors (Slack, Datadog, Jira, Zoho, Chrome) and across both Desktop and cloud tasks:

- #35899 — connectors not available until a user message warms the session
- #36327 — same, closed as duplicate
- #32000 — missing auth scope in scheduled sessions
- #40835 — editing a task silently disables connectors

No one has posted a workaround. No Anthropic team member has commented on any of these issues. I filed my own report since the existing ones are mostly from Desktop/Cowork users — I'm on Teams, cloud-only, no Desktop fallback:

👉 https://github.com/anthropics/claude-code/issues/43397

Anyone else dealing with this? Found anything that works?

Workaround found! Reddit user /u/e_lizzle identified a workaround that worked for me. It costs a few extra tokens, but if you start the scheduled task prompt by telling it not to do any work itself and instead to use an agent to do the entire task, everything works fine, because the subagent gets MCP tools initialized properly. For now I'm telling it to have the subagent report a summary back up to the primary so I can look at its results in the task log. The cost difference is probably negligible, and it solves the problem until it's formally fixed.

submitted by /u/checkwithanthony
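That delegation workaround amounts to a prompt pattern along these lines — a hypothetical sketch, not the poster's exact wording, with the task details invented for illustration:

```
Do NOT perform the task yourself. Delegate the ENTIRE task to a
subagent; the subagent initializes MCP tools correctly.

Task for the subagent: summarize today's unread messages in the
support channel and flag anything unanswered for 3+ days.

When the subagent finishes, have it report its full summary back
to you, and repeat that summary verbatim so it appears in the
task log.
```

The key elements are the explicit "do no work yourself" instruction and the report-back step, so the results remain visible from the scheduled-task log.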
I got tired of guessing if my CLAUDE.md changes actually helped, so I built a linter for it
Anyone else change their CLAUDE.md, push it, and just... hope Claude does better?

I built agenteval, a CLI that lints, benchmarks, and scores your AI coding instructions. Think ESLint but for CLAUDE.md, AGENTS.md, copilot-instructions, .cursorrules, and Anthropic skills. Plug it into your CI pipeline and instruction quality becomes a merge gate just like tests.

What it does:

- Lint — Dead references, filler phrases, contradictions, token budget overruns, broken links, vague instructions, and skill metadata validation.
- Harvest — Mines your git history for AI-assisted commits and builds eval benchmarks from real work.
- Run + Compare — Scores agent performance on tasks; shows exactly what improved when you changed your instructions.
- CI — Gates PRs on instruction quality regressions.
- Trends — Tracks scores over time so you can see if your team is getting better.

The "Aha!" moment: The first time I ran the linter on my own CLAUDE.md, it found 2 dead file references, 3 filler phrases, and a section eating 42% of my token budget. Claude was reading instructions about files that didn't exist anymore.

Quick Start — standalone binary, no Bun/Node needed:

```
curl -fsSL https://raw.githubusercontent.com/lukasmetzler/agenteval/main/install.sh | bash
agenteval lint
```

Repo: https://github.com/lukasmetzler/agenteval

What checks would be useful for your setup?

submitted by /u/KrayAUT
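A "dead file reference" check like the one described is straightforward to sketch. This is an illustration in the spirit of that lint rule, not agenteval's actual implementation — the backtick-path convention and `dead_references` helper are assumptions for the example.

```python
# Illustration only — not agenteval's real lint rules.
# Flags file paths referenced in an instructions file that no
# longer exist in the repository.
import re
from pathlib import Path

# Assumes references are written as backticked paths like `src/app.py`.
REF_PATTERN = re.compile(r"`([\w./-]+\.(?:py|ts|md|json))`")

def dead_references(instructions: str, root: Path) -> list[str]:
    """Return referenced file paths that do not exist under root."""
    refs = REF_PATTERN.findall(instructions)
    return [r for r in refs if not (root / r).exists()]

text = "Read `src/app.py` before editing. Config lives in `old/settings.json`."
print(dead_references(text, Path(".")))  # paths missing from the current repo
```

Run against a real CLAUDE.md, anything this returns is an instruction Claude is burning tokens on for files that no longer exist.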
I used Claude to tear apart a ChatGPT-generated business strategy. Here's what it caught and the prompt I reverse-engineered from the whole thing.
A friend of mine is working on his business and sent me a full strategy to hit $1M in revenue — he built the whole thing by going back and forth with ChatGPT. He's not very technical, just had a long conversation until he had a plan. For what it is, ChatGPT did a solid job getting him to a first draft. But I wanted to see what Claude would do with it. So I dropped the full strategy into Claude and asked it to review, critique, and improve it where it saw fit.

Claude's assessment: ChatGPT was 85-90% there at a high level. But it found some real issues:

- Revenue projections were too optimistic. Claude flagged specific assumptions that didn't hold up
- The channel strategy was basically "be everywhere" with no sequencing or prioritization
- Pricing model had gaps that would've cost him real money
- A few of the "growth levers" were actually just repackaged generic advice

For each correction, Claude gave the reasoning — not just "this is wrong" but "here's why this doesn't work and here's what to do instead." Then it rebuilt the strategy with a revised plan and next steps. I sent the improved version back to my friend and he was fired up. But sitting there afterwards I thought — I'm not thinking big enough for my own business either. So I reverse-engineered the whole exchange into a reusable prompt that anyone can use for their own strategic assessment. Here it is:

Role: Act as a seasoned strategic business consultant with 20+ years advising founders, executives, and high-growth teams across industries. You specialize in identifying blind spots, unlocking overlooked growth levers, and reframing how leaders think about their business, market position, and long-term trajectory.

Action: Conduct a comprehensive strategic assessment of my business or professional situation. Challenge my current thinking, surface hidden opportunities, and provide a bold but grounded action plan that pushes me beyond incremental improvement toward transformative growth.

Context:

- My business/role: [describe your business, title, or professional situation]
- Current revenue or stage: [startup, growth, mature, pivoting — include numbers if comfortable]
- Industry: [your field]
- Biggest current challenge: [what's keeping you stuck or what you're trying to solve]
- What I've already tried: [past strategies, pivots, or investments]
- Team size: [solo, small team, department, org-wide]
- Time horizon: [90-day sprint, 1-year plan, 3-5 year vision]
- Risk tolerance: [conservative, moderate, aggressive]
- Resources available: [budget range, tools, partnerships, time commitment]
- What "thinking bigger" means to me: [scale revenue, expand market, build a team, launch new product, personal brand, exit strategy, etc.]

Expectation: Deliver a strategic assessment that includes:

1. Honest Diagnosis — where the business actually stands vs. where I think it stands, including blind spots
2. Market Position Audit — how I compare to competitors, what whitespace exists, and where the market is heading
3. Three Bold Growth Levers — specific, non-obvious opportunities I'm likely underexploiting (not generic advice like "use social media")
4. The "10x Question" — reframe my biggest challenge as a 10x opportunity and show what that path looks like
5. 90-Day Momentum Plan — the 3-5 highest-leverage moves I should make in the next quarter, with sequencing
6. Resource Optimization — how to get more from what I already have before spending more
7. Risk/Reward Matrix — for each recommendation, what's the upside, downside, and effort level
8. The One Thing — if I only do ONE thing from this assessment, what should it be and why

Keep the tone direct and strategic — like a $500/hour consultant giving real talk, not motivational fluff. Be specific to my situation, not generic.

Why this works well with Claude specifically: The prompt is structured using the RACE framework — Role, Action, Context, Expectation. Claude handles structured (even unstructured) prompts really well because of how it processes context, but not all AIs can. I wouldn't trust Copilot, for example, to do this. The "[fill in your details]" fields are doing the heavy lifting — they force you to give Claude enough real context to be specific instead of generic.

A few things I noticed comparing Claude's output to ChatGPT's on this same prompt:

- Claude is more willing to tell you hard truths. ChatGPT tends to validate your existing thinking. Claude will straight up say "your pricing model doesn't make sense because..."
- Claude's "10x Question" reframes tend to be more creative — it doesn't just scale up the existing plan, it rethinks the approach
- Claude is better at the Risk/Reward matrix because it actually weighs downsides honestly instead of hand-waving them

I've been using this for my own business planning (I build apps as a solopreneur) and Claude's outputs have been genuinely useful — especially the blind spots section. It caught things I'd been ignoring. Full disc
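The RACE structure described in the post can be assembled mechanically from fill-in fields. A hypothetical illustration (the `race_prompt` helper and sample field values are not from the original post):

```python
# Illustration: rendering a RACE-structured prompt (Role, Action,
# Context, Expectation) from fill-in fields, as the post describes.
def race_prompt(role: str, action: str, context: dict, expectation: str) -> str:
    # Context fields are flattened into "Label: value." sentences.
    ctx = " ".join(f"{label}: {value}." for label, value in context.items())
    return (f"Role: {role}\n\nAction: {action}\n\n"
            f"Context: {ctx}\n\nExpectation: {expectation}")

# Sample values, abbreviated for the example.
p = race_prompt(
    role="Act as a seasoned strategic business consultant.",
    action="Conduct a comprehensive strategic assessment.",
    context={"Industry": "[your field]", "Risk tolerance": "[moderate]"},
    expectation="Deliver an honest diagnosis and a 90-day plan.",
)
print(p)
```

Filling the bracketed fields before sending is what keeps the output specific rather than generic.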
How are people using Claude as a personal assistant (Slack + Outlook + To-Do)? ADHD-friendly setup help 🙏
Hey all, looking for some practical advice / setups from people who've actually made this work.

Context:

- I have pretty severe ADHD, so I'm trying to externalise my brain as much as possible
- I already use Claude (Pro) and ChatGPT (Plus)
- Claude is connected to Slack, which is great
- We're a small company using Microsoft 365 (Outlook, calendar, etc.)

What I want to achieve is basically a proper AI personal assistant layer.

Core goals:

- A central to-do list inside Claude that I can update naturally ("add this", "remind me", etc.) and that it remembers persistently (not just per chat)
- A daily briefing, e.g.: unread / unreplied Slack messages (especially ones I'm tagged in), important Outlook emails I haven't replied to, today's calendar plus anything I should prep for, and things I've likely missed
- Ideally: Claude nudges me on follow-ups, highlights risks (e.g. "you ignored this client for 3 days"), and acts like a second brain rather than just a chatbot

Constraints / reality:

- I only have individual Claude Pro, not Claude Teams
- I can get admin access to M365, but unlikely to get approval for multiple paid seats
- Slack integration works, but Outlook / calendar is the missing piece
- I'm open to tools like Zapier / Make / etc. but want something maintainable

Questions:

- Has anyone actually got Claude working with Outlook + calendar + tasks in a useful way?
- Is Claude Teams the only real way to unlock M365 integration, or are there workarounds?
- Should I be using something like Zapier as the "glue" layer?
- How are people handling persistent memory / to-do lists with Claude?
- Is this a case where I should flip it and use ChatGPT as the "brain" instead?

I'm basically trying to build a reliable ADHD-friendly operating system for work using AI. If you've got a real setup (even scrappy), would massively appreciate you sharing 🙏

submitted by /u/zencatface
View original@Xbox @GearsofWar If you were wondering what to do this June, stop wondering. 👏
@Xbox @GearsofWar If you were wondering what to do this June, stop wondering. 👏
Copilot Cowork, designed for long-running, multi-step work in Microsoft 365, is now available via the Frontier program
submitted by /u/tekz
I built a shared memory system for Claude Code — your AI finally knows what your teammates learned yesterday (open source)
the most annoying thing about ai coding with a team is that everyone's claude starts from zero every single session. your teammate spent 2 hours debugging a stripe webhook yesterday? claude has no idea. team agreed on async/await everywhere? claude uses .then() anyway.

so i built team brain — a claude code plugin that stores team knowledge in a .team-brain/ folder in your repo. you commit it to git, teammates pull, and everyone's claude gets the same context automatically.

how it works: you record stuff as you work:

```
/team-brain learn stripe webhooks retry 3x with exponential backoff
/team-brain decide use REST over GraphQL for public API
/team-brain convention always use async/await never .then()
```

it saves each one as a markdown file in .team-brain/ and auto-generates a BRAIN.md that's capped at 180 lines (claude applies instructions at 92% accuracy under 200 lines, drops to 71% above 400 — so it stays in the sweet spot on purpose).

on every session start a hook checks if anything changed and loads the team brain automatically. no manual config needed.

the cross-tool part is kinda cool — it also generates .cursorrules for cursor users and AGENTS.md for copilot. so your whole team gets the same conventions regardless of what tool they're using.

also has /team-brain onboard which reads everything and generates an onboarding doc. had a new dev join last week and instead of a 2 hour walkthrough he just ran that and was productive in 20 min.

everything is just files in git. no servers, no cloud, no accounts. entries are individual markdown files so they merge cleanly — two people can add knowledge on different branches without conflicts.

install:

```
git clone https://github.com/Manavarya09/team-brain.git ~/.claude/plugins/team-brain
```

then just /team-brain init in your project.

github: https://github.com/Manavarya09/team-brain

also built cost-guardian last week if anyone's interested in tracking claude code spending: https://github.com/Manavarya09/cost-guardian

how are you all handling shared context with your teams? or is everyone just manually editing CLAUDE.md and hoping for the best?

submitted by /u/Cheap_Brother1905
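the entries-as-files plus capped-BRAIN.md design above can be sketched in a few lines. this is an illustration only — not team brain's actual code; the `build_brain` helper and the header format are assumptions for the example.

```python
# Illustration only — not team brain's real implementation.
# Regenerates a capped BRAIN.md from individual markdown entry files,
# so entries merge cleanly in git and the summary stays short.
from pathlib import Path

MAX_LINES = 180  # the post's stated cap, to keep instructions reliable

def build_brain(entries_dir: Path) -> str:
    lines = ["# Team Brain (auto-generated)"]
    for entry in sorted(entries_dir.glob("*.md")):
        lines.append(f"## {entry.stem}")
        lines.extend(entry.read_text().strip().splitlines())
    # hard cap: anything past MAX_LINES is dropped, oldest-last by filename
    return "\n".join(lines[:MAX_LINES]) + "\n"
```

because each entry lives in its own file, two branches adding different knowledge never conflict — only the generated BRAIN.md needs rebuilding after a merge.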
I built an MCP server that gives Claude access to Mail, Calendar, Teams & OneDrive on Mac — no tokens, no cloud
I've been building Pilot MCP — a native macOS MCP server that connects Claude Desktop (and Cursor, Windsurf, VS Code) to your local Mac apps.

What it does:

- Read, search, send and reply to emails from Mail.app (Gmail, Outlook, iCloud, Exchange)
- Create, list and delete calendar events across all your accounts
- Read Microsoft Teams chats and channels — without Graph API tokens
- Full OneDrive file management (read, write, search, move)
- Contacts, Notes, Reminders, OmniFocus, iMessage, Finder
- Word, Excel, PowerPoint and PDF support

What makes it different:

- Privacy-first: all data stays on your Mac. Zero cloud processing.
- Zero config: one command to install, auto-configures all AI clients.
- Teams reads from local IndexedDB — no OAuth, no tokens, no Microsoft API
- 82 MCP tools with safety previews (emails show a preview before sending, you confirm first)

Install:

```
curl -fsSL https://local-mcp.com/install?ref=reddit | bash
```

Or add to your Claude Desktop config:

```
{"mcpServers":{"local-mcp":{"command":"npx","args":["-y","local-mcp@latest"]}}}
```

Website: https://local-mcp.com?ref=reddit
npm: https://npmjs.com/package/local-mcp
GitHub: https://github.com/lanchuske/local-mcp-releases

Free for the first 500 installs — yours to keep forever. Would love feedback!

submitted by /u/Over-Leek-739
@davidfowl @aspiredotdev Designing Aspire to be agent-first at the CLI layer is a big move. Fantastic work from the team! Bringing developers and agents into the same interface unlocks a whole new way to build.
I built AI memory features in Oct 2025. Anthropic shipped Auto-memory, MEMORY.md, and Auto-dream in 2026. They won't respond to my prior art notice.
I'm an indie developer. In October 2025, I published Continuity — a VS Code extension that gives AI coding assistants persistent memory across sessions.

What Continuity does (since Oct 2025):

- Stores decisions and context as local markdown/JSON files
- Automatically captures architectural decisions (AutoDecisionLogger.ts)
- Analyzes conversations for insights (ConversationAnalyzer.ts)
- Watches for file changes (ArchitecturalFileWatcher.ts)
- Injects context at session start
- Works with Claude, Cursor, Copilot via MCP

What Claude Code shipped in 2026:

- MEMORY.md — local markdown storage
- Auto-memory — automatically captures context
- Auto-dream — automatically captures insights while you work
- Session injection at startup

Side-by-side comparison:

| My Code (Oct 2025) | Claude Code (2026) |
|---|---|
| SESSION_NOTES.md | MEMORY.md |
| AutoDecisionLogger.ts | Auto-memory |
| ConversationAnalyzer.ts | Auto-dream |
| ArchitecturalFileWatcher.ts | File detection |
| ProjectInstructionsGenerator.ts | CLAUDE.md |
| 71 service files | ? |
| 80+ MCP tools | Built-in |
| 756+ decisions | New feature |

Timeline:

- Oct 3, 2025 — First commit (hash: 4713a7bc109e3eb55e0fa4fd35f22012bc291060)
- Oct 31, 2025 — Published on VS Code Marketplace
- Dec 2025 — "Session Memory" leaked in Claude Code
- Jan 2026 — MEMORY.md ships
- Mar 2026 — Auto-dream added

What I did:

- Dec 20, 2025 — Contacted Anthropic support (ticket #215472360013037)
- Dec 25, 2025 — Sent formal prior art notice to their legal team
- Jan 9, 2026 — Sent follow-up requesting acknowledgment
- Mar 2026 — Tried support chat again

What I got back: Nothing. Four attempts. Zero response.

I'm not accusing them of copying code. I can't prove they saw my work. But the architectural overlap is significant, and I published four months before they shipped. All I'm asking for is acknowledgment that my communication was received. That's it.

Evidence:

- GitHub: https://github.com/Alienfader/continuity
- VS Code Marketplace: Search "Continuity"
- Gist: https://gist.github.com/Alienfader/9140a7311164d37a90f16600a1e4b6f1

Has anyone else dealt with this? What recourse do indie devs actually have?

submitted by /u/Alienfader
Based on 40 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.