Built to make you extraordinarily productive, Cursor is the best way to code with AI.
Based on these social mentions, "Cursor Tab" appears to be mentioned primarily in the context of AI-powered coding tools and IDE integrations. Users seem to view Cursor (which includes Cursor Tab functionality) as a competitive player in the AI coding assistant space, often comparing it favorably to alternatives like Windsurf and Claude Code. The mentions suggest strong developer interest in integrating Cursor with various AI models and extending its capabilities through custom bridges and plugins. However, the provided content lacks specific user feedback about Cursor Tab's performance, pricing, or detailed user experiences, making it difficult to assess overall user sentiment or identify key strengths and complaints about this particular feature.
Mentions (30d): 10 (1 this week)
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 560
Funding Stage: Series D
Total Funding: $3.2B
Made VSmux to work with multiple Claude sessions inside VS Code easily
Hey there, wanted to share this extension that I made for VS Code and Cursor as an alternative to the fancy-schmancy ADE tools. It brings the agent CLIs, browser, git management, and code all into one app. You get the best features from the Codex app, Superset, and Conductor, but inside VS Code, so you don't need to open multiple apps to code and use the agents. I highly recommend the native tabs setting in VS Code to switch between different projects easily too.

Links: github.com/maddada/vsmux and https://marketplace.visualstudio.com/items?itemName=maddada.VSmux — or just search for "vsmux" in the extensions panel of any VS Code fork and you'll find it.

Some features:
- Shows indicators for running and finished agents; optionally plays a sound
- Organize, split, and group sessions
- Add any CLI agent with custom launch commands
- Manages integrated VS Code browser tabs well, with per-project bookmarks
- One click to run commands
- Supports T3 code
- Agents keep running and reconnect when you close or restart VS Code (customizable timeout)
- Very customizable, with lots of hotkeys
- Session names in the sidebar are taken from terminal titles

I can run 10+ Claude Code sessions, all organized and grouped, very easily with it. It still has a few rough edges, but with some help and bug reports I'm sure I can make it perfect. I also have a tool called Agent Manager X on my GitHub that shows you the current status of all your running agents in a floating toolbar (macOS only for now, but please be my guest if you'd like to fork it for Windows or Linux). Please let me know how you like it. It's just a passion project for my own use, and it will stay free and open source forever.
I made an SSH Terminal for my Claude Code sessions
I was taking a shit a few months ago and got really frustrated that I couldn't access my Claude Code sessions. I also hated not being able to check my sessions when I was out grocery shopping at Costco or waiting for my wife to get out of the bathroom on dates. So I made an app.

It started as a simple SSH terminal like any other you can download on the Play Store. I started making it more complex and just kept adding features: session persistence, port forwarding, an in-app browser to check your front end on remote servers, AI change tracking, an SFTP file browser with a built-in code editor, file previews, natural-language-to-command, an AI chatbot, settings-based front-page animation, audit logging, an encrypted vault — the list goes on. My friend had the same problem and wished he could code easily in his hot tub, so we joined up to make this app as slick as possible.

We released it on the Google Play Store (currently pending on the iOS App Store) as TerminaLLM. It's currently free to download. It has subscription tiers, but I think the free tier is pretty useful. Here's the full breakdown:

Free Tier:

Terminal & Sessions
- Full SSH terminal with session persistence (survives disconnects)
- Requires our fork of a C library called dtach to be installed on the remote machine (instructions included in-app)
- 5 saved connection profiles with live health-check dots, environment variables, and startup commands
- Push notifications for stale sessions (which could indicate your AI needs your input!)
SFTP File Management
- Browse, upload, download, rename, delete, mkdir on remote servers
- File preview with syntax highlighting
- Recursive file search

Voice Input
- Platform speech-to-text
- On-device Whisper AI transcription compatibility

AI Coding Workflow
- Auto-detects Claude Code, Aider, Cursor, Copilot and their states
- Auto-detects OAuth/auth URLs from AI tools
- Media drop — camera/gallery/files → SFTP upload → path into terminal

Productivity
- 50+ command palette with search and favorites
- Reusable snippets with variable interpolation
- Visual ANSI prompt color customizer
- Auto-run commands on connect

Themes & Customization
- All 15 built-in themes
- Custom theme editor
- Import iTerm2 and JSON themes
- Landscape split QWERTY keyboard

Security (all free, always)
- TOTP MFA + Face ID / Touch ID / Fingerprint
- AES-256-GCM encrypted credential vault with auto-fill
- Paranoid / Casual / Speedster security presets (Speedster is my personal favorite)
- Security event audit log with JSON export
- Screenshot/screen-recording prevention

Platform-Specific (pending Apple approval)
- iOS Live Activities — Lock Screen + Dynamic Island SSH status

Plus tier — AI change detection, SFTP code editor, batch ops, bookmarks, session sharing (creates .cast files), port forwarding, jump host/bastion server, encrypted backup, session recording, unlimited profiles

Pro tier — AI chatbot + natural-language-to-shell-command translation

For Claude Code specifically, we recommend generating an OAuth token using claude setup-token and setting it in your per-profile environment variables (CLAUDE_CODE_OAUTH_TOKEN). Otherwise, /login is required repeatedly when using an SSH terminal; on a headless first run, /login opens a browser tab prompting login on the server machine itself, which is why we recommend the OAuth-token approach, especially if you're going to be far away from your machine.
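A rough sketch of what that dtach-plus-token recipe amounts to, assuming the post's setup (the helper name, socket path, and token value below are invented for illustration; the app builds its sessions internally):

```python
import shlex

def build_remote_claude_cmd(socket_path: str, oauth_token: str) -> str:
    """Hypothetical helper: run Claude Code inside a dtach session on the
    remote host (so it survives disconnects), with the OAuth token from
    `claude setup-token` in the environment to skip the /login browser flow."""
    env = f"CLAUDE_CODE_OAUTH_TOKEN={shlex.quote(oauth_token)}"
    # dtach -A attaches to the socket if it already exists, else creates it.
    return f"{env} dtach -A {shlex.quote(socket_path)} claude"

cmd = build_remote_claude_cmd("/tmp/claude.dtach", "sk-ant-oat-example")
print(cmd)
```

You'd pass a string like this as the remote command of your SSH connection; reconnecting and re-attaching to the same socket path resumes the session.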
I personally use an always-on, Wi-Fi-capable Raspberry Pi to wake my machine remotely, since I don't like keeping my laptop awake all day every day. Please feel free to reach out with any questions! There's also the app website here: https://terminallm.app/
I built an open-source MCP tool that gives AI a map of your game codebase
Privacy & Links
- 100% local. Everything runs on your machine. No telemetry, no cloud calls, no accounts, no analytics. If you're suspicious, scan the entire codebase yourself — honestly there's nothing to steal, and I really don't want to go to jail.
- Apache 2.0 — fully open source and free for commercial use.
- GitHub: https://github.com/pirua-game/ai_game_base_analysis_cli_mcp_tool
- Install: pip install gdep (CLI) / npm install -g gdep-mcp (MCP server)
- Supported engines: Unity (C#) · UE5 (C++) · Axmol/Cocos2d-x (C++) · .NET · Generic C++

The Problem
If you've tried using Claude Code, Cursor, or Gemini CLI on a game project, you've probably seen this: the AI reads your files one at a time, can't follow .uasset or Blueprint references, and eventually hallucinates a dependency that doesn't exist. I watched Claude spend 40+ messages trying to figure out which classes my CombatManager actually affected. It was basically reading files alphabetically and guessing. Meanwhile I'm sitting there thinking "I could have just grep'd this faster." The real pain? Even when the AI finally gives you an answer, you can't trust it. "CombatCore probably depends on PlayerManager…" — that "probably" cost me an afternoon of debugging.

Why I Built gdep
So I built gdep (Game DEPendency analyzer). It's a CLI tool, MCP server, and web UI that scans your entire UE5/C++ project in under 0.5 seconds and gives your AI assistant a structural map of everything — class dependencies, call flows across C++→Blueprint boundaries, GAS ability chains, animator states, and unused assets. Think of it as giving your AI a reconnaissance drone and a tactical map, instead of making it open doors one at a time.
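The reverse-trace question at the heart of this ("what breaks if I change this class?") boils down to walking the dependency graph backwards. A minimal, self-contained sketch of that idea, with toy class names (this is not gdep's code or API):

```python
from collections import defaultdict, deque

def impact_of(cls: str, deps: dict[str, set[str]]) -> set[str]:
    """Everything that transitively depends on `cls`, i.e. what might
    break if you change it. `deps` maps a class to the classes it uses."""
    # Invert the "A depends on B" edges so we can walk dependents.
    rdeps = defaultdict(set)
    for src, targets in deps.items():
        for t in targets:
            rdeps[t].add(src)
    # Breadth-first walk over the reversed edges.
    seen, queue = set(), deque([cls])
    while queue:
        for dependent in rdeps[queue.popleft()] - seen:
            seen.add(dependent)
            queue.append(dependent)
    return seen

deps = {
    "PlayerManager": {"CombatCore"},
    "CombatManager": {"CombatCore", "PlayerManager"},
    "HUDWidget": {"PlayerManager"},
}
print(sorted(impact_of("CombatCore", deps)))
```

The real tool layers confidence ratings on top, since edges recovered from binary .uasset files are less certain than edges parsed from source.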
Real-World Comparison: Same Question, Same Project
I tested both approaches on the same Lyra-based UE5 project.

Without gdep (2 min 10 sec):
- AI launched 2 Explore agents and used 56 tool calls reading files one by one
- Result: generic overview — "45+ C++ files dedicated to GAS", vague categorization
- Blueprint analysis: just counted assets by folder ("6 Characters, 5 Game Systems, 13 Tools…")
- No confidence rating, no asset-coverage metrics

(Demo: https://reddit.com/link/1s7miv7/video/qg131o1qq5sg1/player and https://reddit.com/link/1s7miv7/video/kxrra3mqq5sg1/player)

With gdep MCP (56 sec, 2.3x faster):
- AI made 3 MCP calls — get_project_context → analyze_ue5_gas + analyze_ue5_blueprint_mapping in parallel
- Result: structured analysis with confidence headers — Confidence: HIGH | 3910/3910 assets scanned (100%)
- Every ability listed with its role; 35 GA Blueprints + 40 GE Blueprints + 20 AnimBlueprints mapped
- Tag distribution breakdown: Ability.* (30), GameplayCue.* (24), Gameplay.* (7)
- Blueprint→C++ parent mapping with K2 override counts per Blueprint
- Identified project-specific additions (a zombie system) vs the Lyra base automatically

Same AI, same project, same question. The difference is gdep gives the AI structured tools instead of making it grep through files.

What It Actually Does
Here's what it answers in seconds:
- "What breaks if I change this class?" — Full impact analysis with reverse-trace across the project. Every result comes with a confidence rating (HIGH/MEDIUM/LOW) so you know what to trust.
- "Where is this ability actually called?" — Call-flow tracing that crosses C++→Blueprint boundaries (UE5).
- "Are there assets nobody references?" — Unused-asset detection with UE5 binary path scanning.
- "What's the code smell here?" — 19 engine-specific lint rules: things like GetComponent in Update(), SpawnActor in Tick(), missing CommitAbility() in GAS abilities.
- "Give my AI context about the project" — gdep init generates an AGENTS.md file that any MCP-compatible AI reads automatically on startup.

It works as:
- 26 MCP tools for Claude Desktop, Cursor, Windsurf, or any MCP-compatible agent — npm install -g gdep-mcp, add one JSON config, done.
- 17 CLI commands for terminal use.
- Web UI with 6 interactive tabs — class browser with inheritance chains, animated flow-graph visualization, architecture health dashboard, engine-specific explorers (GAS, BehaviorTree, StateTree, Animator, Blueprint mapping), live file watcher, and an AI chat agent that calls tools against your actual code.

Measured performance:
- UE5: 0.46 seconds on a 2,800+ asset project (warm scan)
- Unity: 0.49 seconds on 900+ classes
- Peak memory: 28.5 MB

What gdep Is NOT
I want to be upfront about this:
- It's not a magic wand. AI still can't do everything, even with a full project map.
- It's not an engine or editor replacement. It gives AI a map and a recon drone — it doesn't replace your IDE, your debugger, or your brain.
- It has confidence tiers for a reason. Binary asset scanning (like UE5 .uasset files) is MEDIUM confidence. Source-code analysis is HIGH. gdep tells you this on every single result so you know when to double-check.
- Delegating ALL your work to AI is still not appropriate. gdep helps AI understan
I’m saving 10+ hours a week with Claude, but I stopped "prompting" months ago.
Founders keep trying to automate their lives with complex AI stacks, and I keep seeing the same thing happen: they end up with 15 tabs open, copy-pasting prompts, and duct-taping everything together with Zapier workflows that quietly break every week. It looks productive, but they're spending more time managing the AI than running the business.

The real leverage isn't about adding more tools or "better" prompts. It's about context architecture. The biggest shift for me was moving my SOPs, meeting notes, and CRM into one centralized "source of truth" (I use Notion) and plugging Claude directly into that context. When Claude isn't guessing what your business does, the hallucinations disappear and the utility skyrockets.

Here are the 3 specific use cases that saved me 10+ hours this week:

1) The Speed-to-Lead Workflow. I stopped starting follow-up emails from scratch.
How it works: I record the sales call directly in my workspace. Claude has access to my brand-voice doc and my product guide.
The result: I feed the transcript to Claude, and it drafts a personalized email based on the prospect's actual pain points. It takes 90 seconds to review and hit send.

2) The Zero-Spreadsheet Data Analyst. I don't do manual data entry for KPI trackers anymore.
How it works: During my weekly metrics meetings, I just talk through the numbers: subscribers, CPL, revenue.
The result: Claude reads the meeting transcript, extracts the data points, and updates my database automatically. I haven't manually touched a spreadsheet in a month.

3) The Infinite-Context Content Engine. I stopped staring at a blank cursor for LinkedIn/Reddit posts.
How it works: I built a "Knowledge Hub" with all my past newsletters and internal notes.
The result: I use a prompt that references that specific internal knowledge. It drafts content that actually sounds like me, because it's referencing my real ideas, not generic LLM "as a leading provider" fluff.
The reason people think AI is a "gimmick" is that they're giving it zero context. When you copy-paste a prompt into a blank window, the AI is just guessing. When your AI can see your brand voice, your products, and your transcripts all in one system, it stops guessing and starts operating. This is from me, guys. I'd love to hear what other business owners are doing with Claude — we should share practical use cases beyond the marketing hype.
Open source Visual Studio 2026 Extension for Claude
Hi guys, I'm a senior .NET dev with 25 years of experience building .NET solutions (https://github.com/adospace). Visual Studio 2026 is great, but let's be honest: its GitHub Copilot extension sucks! In recent years, I ended up using Visual Studio Code and Cursor just because my favorite IDE doesn't have a good extension for Claude AI. So I used Claude AI to build a VS 2026 extension that lets you use your own Anthropic API key. I now use it every day, and it really works great.

What do I not like about VS 2026 Copilot and existing OSS extensions?
1) Copilot is focused on helping you with your task, NOT on vibe-coding. In other words, it's hard to spawn and control agents while you work on other things: the UI is ugly, and session management is simply inefficient.
2) Copilot is confined to a tool window, while I'd like to use it in a full document tab.
3) Copilot uses PowerShell most of the time to read/write files, explore solutions, etc. Claude is far more efficient with Linux tools like bash, grep, and glob. Not to mention how much the AI's terminal commands pollute the history.

My VsAgentic extension is built on these core features:
1) The tool only keeps the chat sessions linked to the working folder, so when you reopen your solution, you get back only the sessions related to that solution.
2) Tools are Linux-based, with full access to the bash CLI, grep, glob, etc.
3) The AI can spawn as many agents as it likes, for example to explore the code base or plan a complex task.
4) Run multiple chat sessions in parallel in full-blown document tabs.

Of course, it's still far from perfect, but I'd really like to hear your feedback! Please check it out: https://github.com/adospace/vs-agentic
I built an app where AI agents autonomously create tasks, review each other's work, message each other — while you watch everything happen on a board. Free, open source.
Not a regular todo/kanban app (I compared it with the top projects in this space). Anthropic recently added an experimental feature, Agent Teams: you spin up a team of agents that work in parallel, coordinated by a lead agent, while communicating with each other. It's powerful, but it all happens in the terminal, and it's hard to see what's going on, especially when multiple agents work in parallel.

I built a desktop app that gives you a visual layer on top of it. You see all tasks on a kanban board, where they move automatically as agents work. You can review changes per task like in Cursor, send messages to agents, add comments, attach files, and more.

A few things I built on top of what the Claude Code CLI provides:
- Real-time kanban board — tasks move across columns automatically as agents work. You can see all your projects and teams in one place.
- Cross-team communication — agents from different teams can message each other. This doesn't exist in the CLI itself; I implemented it as a custom layer. It makes multi-team workflows actually practical.
- Built-in review workflow — agents automatically review each other's tasks. You see the full review conversation and results.
- Solo mode — don't need a full team? Run a single agent with the same visual interface. It auto-creates and tracks its own tasks on the board. Same as plain Claude Code CLI, but you actually see what's happening.
- Per-task code review — accept/reject individual code hunks, like Cursor's review flow. Leave comments, request changes; agents pick up your feedback.
- Convenient task-execution logs — see exactly which tools an agent used for each task in a clean visual timeline (not just text). Only shows what's relevant to that specific task.
- Context monitoring — a six-category breakdown of what consumes tokens at every step.
- Zero setup — the app installs and configures Claude Code for you. There's also a built-in UI for installing MCP servers, plugins, and skills.
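The "tasks move across columns automatically" behavior is, at its core, a small state machine driven by agent events. A toy sketch of that idea (column names and transitions are invented; this is not the app's actual data model):

```python
from enum import Enum

class Column(Enum):
    TODO = "To do"
    IN_PROGRESS = "In progress"
    REVIEW = "Review"
    DONE = "Done"

# Allowed forward transitions: each agent event advances a task one column.
FLOW = {
    Column.TODO: Column.IN_PROGRESS,    # an agent picks up the task
    Column.IN_PROGRESS: Column.REVIEW,  # the agent finishes; peer review starts
    Column.REVIEW: Column.DONE,         # a reviewing agent (or human) approves
}

def advance(col: Column) -> Column:
    """Move a task one column forward; terminal columns stay put."""
    return FLOW.get(col, col)

col = Column.TODO
for _ in range(3):  # pickup -> finish -> approval
    col = advance(col)
print(col)
```

The UI's job is then just to re-render the board whenever one of these transitions fires, which is what makes the parallel agent activity visible at a glance.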
Also works as a session viewer — browse and analyze any Claude Code session history across projects. Not the main feature, but handy if you use Claude Code a lot. It's located in the "Sessions" tab in the sidebar.

It's 100% free, open source, and runs locally. The app doesn't require any API keys. Here's how it compares to similar tools (full comparison):
- Cross-team communication: Agent Teams UI ✅ · Vibe Kanban ❌ · Aperant ❌ · Cursor — · Claude Code CLI ❌
- Agent-to-agent messaging: Agent Teams UI ✅ (native mailbox) · Vibe Kanban ❌ (isolated) · Aperant ❌ (pipeline) · Cursor ❌ · Claude Code CLI ✅⚠️ (no UI)
- Full autonomy: Agent Teams UI ✅ (end-to-end) · Vibe Kanban ❌ (human manages) · Aperant ❌ (fixed pipeline) · Cursor ⚠️ (isolated) · Claude Code CLI ✅⚠️ (no UI; teams are ephemeral)
- Hunk-level code review: Agent Teams UI ✅ · Vibe Kanban ❌ · Aperant ❌ · Cursor ✅ · Claude Code CLI ❌
- Session analysis: Agent Teams UI ✅ · Vibe Kanban ❌ · Aperant ⚠️ · Cursor ❌ · Claude Code CLI ❌
- Zero setup: Agent Teams UI ✅ · Vibe Kanban ❌ · Aperant ❌ · Cursor ✅ · Claude Code CLI ⚠️
- Price: Agent Teams UI free · Vibe Kanban free / $30 mo · Aperant free · Cursor $0–200/mo · Claude Code CLI subscription

Demo video attached — easier to watch than to read about.
GitHub: https://github.com/777genius/claude_agent_teams_ui
Site: https://777genius.github.io/claude_agent_teams_ui/

It's been useful for me personally — seeing all tasks across projects in one place and quickly checking what changed has saved me a lot of context switching. Hope it helps someone else too. Happy to answer questions.
The Vectorized/Semantic 2nd Brain You Know You Need
I started this because from day one, I sensed (like any decent developer or human with half a brain) that context engineering alone, or even a decent "saddle" as people are calling it, wasn't going to get me where I wanted to go. Around the same time, I discovered my bald brother Nate B. Jones (AI news & strategy analyst) through a YouTube video he made about creating a "$0.10/month second brain" on Supabase + pgvector + MCP. So yeah... I'm a freaking genius (Claude told me), so I got the basic version running in an afternoon. Then I couldn't stop.

The project is cerebellum — a personal, database-backed memory system that speaks MCP and reads/writes/searches like an LLM (i.e. semantically), so any AI tool (Claude Code, Cursor, ChatGPT, Gemini, whatever ships next year) can query the same memory store without any integration work. One protocol, every engine.

I realize that in some circles, everyone and their mom is either trying to build something like this, or they're skirting around the idea and just haven't gotten there yet. So I wasn't going to share it, but it's been so useful for me that it feels wrong not to. Here's what the architecture I've built actually looks like, why it took a lot longer than an afternoon, and the ways it may be helpful for you (and different/better than whatever you've been using).

Three layers sit between a raw thought and permanent storage:

1. The Operator (aka "Weaver", "Curator", "Compiler", etc.)
Going for a Matrix-type name to accompany and try to match the bad-assery of the "Gatekeeper" (see below), but I haven't been able to. Suggestions are encouraged — this one has been eating at me. Every capture — from the CLI or any AI tool — lands in a buffer/web before it touches the database.
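The buffer's job, roughly: per capture, decide whether to pass it on, hold it, or merge held fragments into one insight. A runnable caricature of that routing (a stubbed word-count heuristic stands in for the project's LLM judgment, and all names are invented):

```python
from dataclasses import dataclass, field

@dataclass
class CaptureBuffer:
    """Toy three-way router: pass-through / hold / synthesise."""
    held: list[str] = field(default_factory=list)

    def route(self, thought: str) -> str:
        if len(thought.split()) >= 8:          # stub for "complete, self-contained"
            return f"PASS-THROUGH: {thought}"
        self.held.append(thought)              # low-signal fragment: wait
        if len(self.held) >= 2:                # stub for "2+ entries share a theme"
            merged = "; ".join(self.held)
            self.held.clear()                  # fragments never reach the database
            return f"SYNTHESISED: {merged}"
        return "HELD"

buf = CaptureBuffer()
print(buf.route("maybe switch DB?"))       # fragment: held in the buffer
print(buf.route("pgvector is enough"))     # second fragment: merged and routed
```

The real system adds the serialization and TTL handling described below; the point here is only the shape of the three-way decision.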
The Operator is an LLM running against that buffer (or "crawling", catching, and synthesizing/"sewing" thoughts from the web, as I like to imagine) that makes one of three calls:
- pass-through: a complete, self-contained thought → route to the next layer
- hold: a low-signal fragment → sit in the buffer and wait for related captures to arrive
- synthesise: 2+ buffered entries share a theme → collapse them into one stronger insight and discard the fragments

So if I jot three half-baked notes about a decision I'm wrestling with, the Operator catches and holds onto them. When the pattern solidifies, it compiles one coherent thought and routes that downstream. The fragments never reach the database. The whole buffer runs on a serialized async chain so concurrent captures don't corrupt each other, and TTL expiry never silently discards anything — expired entries route individually if synthesis fails. I'll probably mention it again, but the race conditions and other issues that arose from building this funnel are definitely the most interesting problems I've faced so far (aside from naming things after the Matrix + brain stuff)...

2. The Gatekeeper
What survives the Operator hits a second LLM evaluation. The Gatekeeper scores each thought 1–10 (Noise → Insight-grade), generates an adversarial note for borderline items, checks for contradictions against existing thoughts in the DB, and flags veto violations — situations where a new capture would contradict a directive I've already marked as inviolable. It outputs a recommendation (keep, drop, improve, or "axiom") and a reformulation if it thinks the thought can be sharper. By the way, "axiom" is the idiotic neural-esque term I came up with for a permanent directive that bypasses the normal filtering pipeline and tells every future AI session: "this rule is non-negotiable." You can capture one with memo --axiom "..."
— it skips the Operator entirely, goes straight to your review queue, and once approved, the Gatekeeper actively flags any future capture that would contradict it. It's not just stored differently; it's enforced differently. TL;DR: an axiom is a rule carved in stone, not a note on a whiteboard. A first-class thought, if you will.

3. User ("the Architect" 🥸)
I have the final say on everything. But I didn't want to have to give that say at the moment I capture a thought. Hence, running memo review walks me through the queue. For each item: score, analysis, the skeptic's note if it's borderline, and a suggested reformulation. I keep, drop, edit, or promote to axiom. Nothing reaches the database without explicit sign-off.

Where is it going?
The part I'm most excited about is increasing the scope of cerebellum's observability to make it truly "watchable", so I can take my hands off the wheel (aside from making a final review). The idea: point it at any app — a terminal session, your editor, a browser tab, a desktop app — and have it observe passively. When it surfaces something worth capturing, the Operator handles clustering and synthesis; only what's genuinely signal makes it to the Gatekeeper queue; I get final say. You could maintain a list of apps cerebellum is watching and tune the TTL and synthesis behavior per s
I built an MCP server & plugin using Claude Code that queries GPT-5, Claude, Gemini, and Grok simultaneously from your IDE — uses your existing $20/mo subscriptions (no API keys needed)
Hey everyone — I've been building https://polydev.ai for the past few months using Claude Code and wanted to share it.

The problem I kept running into: I'd be deep in a coding session in Claude Code and hit a wall where the model keeps hallucinating or giving the same answer — even after I tell it the direction is wrong and it's not solving the issue. So I open ChatGPT in another tab, paste my code, wait for a response, compare it with Claude's answer, then maybe check Gemini too. Rinse and repeat a few times a day.

What polydev.ai does: it's an MCP server that sits inside your IDE (Claude Code, Cursor, Windsurf, Cline, Codex CLI) and queries multiple frontier models simultaneously. One request → four expert opinions. When your AI agent gets stuck or wants a second opinion, it calls get_perspectives through polydev.ai and gets responses from GPT-5, Claude, Gemini, and Grok in parallel.

Your IDE agent → polydev.ai MCP → [GPT-5, Claude, Gemini, Grok] → combined perspectives

The best part — no API keys required: if you're already paying for ChatGPT Plus ($20/mo), Claude Pro ($20/mo), or Gemini Advanced ($20/mo), polydev.ai routes requests through your authenticated CLI sessions. Your existing subscription quota is used, so there's zero extra API cost if you already have the CLIs configured locally.

Getting started is one command: npx polydev-ai@latest

We're looking for honest feedback — would this be useful for developers working on complex projects? What would make it better?

(Demo: https://reddit.com/link/1runy8q/video/8pydjk7wdapg1/player)
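The fan-out pattern behind a tool like get_perspectives (one question, several backends queried in parallel, one combined answer) can be sketched with a thread pool and stubbed model calls (function names invented; not polydev's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the authenticated CLI sessions each model would be
# reached through. Illustration only.
def ask_gpt5(q):   return f"gpt-5: thoughts on {q!r}"
def ask_claude(q): return f"claude: thoughts on {q!r}"
def ask_gemini(q): return f"gemini: thoughts on {q!r}"
def ask_grok(q):   return f"grok: thoughts on {q!r}"

BACKENDS = {"gpt-5": ask_gpt5, "claude": ask_claude,
            "gemini": ask_gemini, "grok": ask_grok}

def get_perspectives(question: str) -> dict[str, str]:
    """Fan one question out to every backend concurrently and collect
    the replies, so total latency is roughly the slowest backend."""
    with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
        futures = {name: pool.submit(fn, question) for name, fn in BACKENDS.items()}
        return {name: f.result() for name, f in futures.items()}

views = get_perspectives("why does this test flake?")
print(len(views))
```

Querying in parallel rather than sequentially is the whole point: the "open four tabs and paste" loop collapses into a single call.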
I tested Windsurf, Cursor, and Claude Code on the same real project. Here's what actually happened.
Everyone's debating Windsurf vs Cursor right now, but nobody's talking about the elephant in the room — Claude Code doesn't even play the same game as either of them, and that changes the whole comparison.

Claude Code produced the cleanest, most maintainable output by a wide margin on my C# backend-service refactor. Clear separation of concerns, no security shortcuts, meaningful error handling. It also asked the most clarifying questions upfront, which felt slow at first but saved me hours of debugging later. The 1M-token context window is genuinely useful on larger codebases where neither IDE can load enough at once. The catch: terminal-only, zero autocomplete, a real learning curve. Not a Cursor replacement — a different tool entirely.

Windsurf's Cascade was the most fun to use. Genuinely autonomous — it reads the files it thinks it needs, makes multi-file changes, and asks for confirmation on ambiguous cases. The Live Preview feature, where changes are written to disk before you accept them, is legitimately great for UI work. $15/mo is hard to argue with. The concern nobody's saying loudly enough, though: three ownership changes in three months. The OpenAI deal collapsed, Google hired the founders, Cognition acquired the product. The founding team is gone. Fine for personal projects, but I'd be cautious building serious team workflows around it.

(I Tested Windsurf, Cursor, and Claude Code on the Same Project. Claude Code Won — But Not in the Way I Expected. | by Himansh | Mar, 2026 | Medium)

Cursor is still the most complete package. Best-in-class tab autocomplete (noticeably faster than Windsurf), 8 parallel background agents, the most mature MCP ecosystem, and .cursor rules for keeping the AI consistent with your project conventions. 1M+ users means there's always a thread with your exact problem already answered. It's $5/mo more than Windsurf, which is irrelevant for most developers but adds up for teams of 10+.
The actual answer in 2026 is that most developers I know are running two or three of these simultaneously: Cursor for daily inline edits, Claude Code for complex architectural sessions, Windsurf when you want full agent autonomy without babysitting the AI. They're not mutually exclusive — Cursor and Windsurf sit at the IDE level; Claude Code sits at the terminal level. Curious what everyone here is actually running. Are you Claude Code only? Still on Cursor? Has anyone switched from Cursor to Windsurf full-time and actually stuck with it?
I built a bridge (claude-ide-bridge) that gives the Claude Code CLI full integration with Cursor/VS Code (115+ tools)
The best part about the new Claude Code CLI isn't just the AI — it's that you can run it over SSH. My favorite workflow right now is SSHing into my dev machine from my phone, running Claude Code from the couch, and watching the code write, debug, and test itself on my monitor. But I realized it was missing a lot of context to make that remote workflow perfect: it couldn't see my open tabs, read LSP diagnostics, or interact with the debugger.

I ended up building an open-source MCP bridge to fix this. It's a standalone server that talks to your editor via a WebSocket extension, effectively giving the CLI full access to your IDE state. Now, even when I'm miles away from my keyboard, Claude can actually:
- Autonomously trigger the VS Code debugger, set breakpoints, and evaluate expressions when my tests fail.
- Use LSP to find references and go to definitions instead of just regex-searching the codebase text.
- See exactly what text I currently have highlighted (or which tab is active) in my editor.

I also added custom slash commands (like /ide-debug and /ide-review) that run specialized sub-agents natively. It supports VS Code, Windsurf, and Cursor. I'd love to hear what workflows you guys would use this for, or if you have ideas for other MCP tools to add! (Links to the GitHub repo and NPM are in the first comment below!)
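At its core, a bridge like this is a message dispatcher: the editor extension registers handlers for IDE operations, and the server routes each JSON request from the CLI to the matching handler. A toy sketch of that dispatch layer (method names and payloads are invented, and the WebSocket transport is omitted; this is not claude-ide-bridge's actual protocol):

```python
import json

HANDLERS = {}

def handler(name):
    """Decorator registering a function as the handler for one method."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@handler("getSelection")
def get_selection(params):
    # In a real extension this would query the editor's active selection.
    return {"text": "<currently highlighted text>"}

@handler("setBreakpoint")
def set_breakpoint(params):
    # In a real extension this would call into the debugger API.
    return {"ok": True, "file": params["file"], "line": params["line"]}

def dispatch(raw: str) -> str:
    """Route one JSON request to its handler and wrap the reply."""
    msg = json.loads(raw)
    result = HANDLERS[msg["method"]](msg.get("params", {}))
    return json.dumps({"id": msg["id"], "result": result})

reply = dispatch('{"id": 1, "method": "setBreakpoint", '
                 '"params": {"file": "app.py", "line": 42}}')
print(reply)
```

Everything the post lists (breakpoints, LSP lookups, reading the active selection) fits this request/handler shape; the WebSocket just carries the JSON both ways.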
Based on 15 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.