Based on the limited social mentions available, AutoGen appears to be recognized as one of the established AI agent frameworks in a competitive landscape that includes LangGraph, CrewAI, and others. Users seem to acknowledge it as part of the mainstream options when evaluating multi-agent systems, though the mentions suggest the space is rapidly evolving with new frameworks emerging frequently. There's an indication that debugging and observability remain challenging aspects of working with AutoGen and similar multi-agent frameworks. However, the provided data is too limited to assess specific user strengths, complaints, pricing sentiment, or overall satisfaction with AutoGen.
Mentions (30d): 1
Reviews: 0
Platforms: 3
GitHub Stars: 56,499 (8,492 forks)
Industry: information technology & services
Employees: 2
GitHub followers: 116,169
GitHub repos: 7,713
GitHub stars: 56,499
npm packages: 20
HuggingFace models: 40
npm downloads/wk: 34
PyPI downloads/mo: 174,093
EVAL #004: AI Agent Frameworks — LangGraph vs CrewAI vs AutoGen vs Smolagents vs OpenAI Agents SDK
Every week there's a new AI agent framework on Hacker News. The GitHub stars pile up, the demo videos...
Show HN: AgentLens – Open-source observability for AI agents
Hi HN,

I built AgentLens because debugging multi-agent systems is painful. LangSmith is cloud-only and paid. Langfuse tracks LLM calls but doesn't understand agent topology — tool calls, handoffs, decision trees.

AgentLens is a self-hosted observability platform built specifically for AI agents:

- Topology graph — see your agent's tool calls, LLM calls, and sub-agent spawns as an interactive DAG
- Time-travel replay — step through an agent run frame-by-frame with a scrubber timeline
- Trace comparison — side-by-side diff of two runs with color-coded span matching
- Cost tracking — 27 models priced (GPT-4.1, Claude 4, Gemini 2.0, etc.)
- Live streaming — watch spans appear in real-time via SSE
- Alerting — anomaly detection for cost spikes, error rates, latency
- OTel ingestion — accepts OTLP HTTP JSON, so any OTel-instrumented app works

Works with LangChain, CrewAI, AutoGen, LlamaIndex, and Google ADK.

Tech: React 19 + FastAPI + SQLite/PostgreSQL. MIT licensed. 231 tests, 100% coverage.

    docker run -p 3000:3000 tranhoangtu/agentlens-observe:0.6.0
    pip install agentlens-observe

Demo GIF and screenshots in the README.

GitHub: https://github.com/tranhoangtu-it/agentlens-observe
Docs: https://agentlens-observe.pages.dev

I'd love feedback on the trace visualization approach and what features matter most for your agent debugging workflow.
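The OTLP ingestion mentioned in the post means any app can send traces without an SDK, just by POSTing OTLP/HTTP JSON. Below is a minimal sketch of that: it builds a one-span trace payload following the standard OTLP JSON encoding and shows the POST. The `/v1/traces` path is the conventional OTLP/HTTP route; whether AgentLens serves it on port 3000, and the `my-agent` / `tool_call:search` names, are assumptions for illustration.

```python
import json
import time
import urllib.request


def build_otlp_payload(service_name, span_name):
    """Build a minimal OTLP/HTTP JSON trace payload containing one span."""
    now_ns = time.time_ns()
    return {
        "resourceSpans": [{
            "resource": {"attributes": [{
                "key": "service.name",
                "value": {"stringValue": service_name},
            }]},
            "scopeSpans": [{
                "scope": {"name": "manual-instrumentation"},
                "spans": [{
                    # Hex-encoded IDs; hardcoded here for the sketch.
                    "traceId": "0123456789abcdef0123456789abcdef",
                    "spanId": "0123456789abcdef",
                    "name": span_name,
                    "kind": 1,  # SPAN_KIND_INTERNAL
                    "startTimeUnixNano": str(now_ns - 1_000_000),
                    "endTimeUnixNano": str(now_ns),
                }],
            }],
        }],
    }


def send_trace(endpoint, payload):
    """POST the payload as OTLP/HTTP JSON; returns the HTTP status code."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


payload = build_otlp_payload("my-agent", "tool_call:search")
# Uncomment against a running instance (endpoint is an assumption):
# send_trace("http://localhost:3000/v1/traces", payload)
```

Any OTel-instrumented app would normally produce this payload via an exporter rather than by hand; the point of the sketch is only that the wire format is plain JSON over HTTP.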
Repository Audit Available
Deep analysis of microsoft/autogen — architecture, costs, security, dependencies & more
AutoGen has a public GitHub repository with 56,499 stars.
Based on user reviews and social mentions, the most commonly cited pain point is cost tracking.