Unlock agentic workflows with Dify. Develop, deploy, and manage autonomous agents, RAG pipelines, and more for teams at any scale, effortlessly.
The limited data available shows some interest in Dify AI across multiple platforms, most visibly on YouTube. However, the social mentions contain no substantive feedback about Dify's actual functionality, pricing, or user experience, so user sentiment about the platform's strengths, weaknesses, or reputation cannot be assessed. More detailed reviews would be needed for a meaningful summary of what users actually think of the platform.
Mentions (30d)
2
Reviews
0
Platforms
4
GitHub Stars
135,161
21,049 forks
Features
Industry
information technology & services
Employees
96
Funding Stage
Seed
Total Funding
$32.8M
GitHub followers
3,295
GitHub repos
26
GitHub stars
135,161
npm packages
8
Launch HN: OctaPulse (YC W26) – Robotics and computer vision for fish farming
Hi HN! My name is Rohan and, together with Paul, I'm the co-founder of OctaPulse (https://www.tryoctapulse.com/). We're building a robotics layer for seafood production, starting with automated fish inspection. We are currently deployed at our first production site with the largest trout producer in North America.

You might be wondering how the heck we got into this with no background in aquaculture or the ocean industry. We are both from coastal communities: I am from Goa, India, and Paul is from Malta and Puerto Rico. Seafood is deeply tied to both our cultures and communities. We saw firsthand the damage being done to our oceans and how wild fish stocks are being fished to near extinction. We also learned that fish is the main protein source for almost 55% of the world's population. Although consumption in America is modest, it is massive globally. And then we found out that America imports 90% of its seafood. What? That felt absurd, and it was the initial motivation for starting this company.

Paul and I met at an entrepreneurship happy hour at CMU. We met to talk about ocean tech, and the conversation went on for three hours. I was drawn to building in the ocean because it is one of the hardest engineering domains out there. Paul had been researching aquaculture for months and kept finding the same thing: a $350B global industry with less data visibility than a warehouse. After that conversation we knew we wanted to work on this together.

Hatcheries, the early on-land stage of production, are full of labor-intensive workflows that are perfect candidates for automation. Farmers need to measure their stock for feeding, breeding, and harvest decisions, but fish are underwater and get stressed when handled. Most farms still sample manually: they net a few dozen fish, anesthetize them, place them on a table to measure one by one, and extrapolate to populations of hundreds of thousands. It takes about 5 minutes per fish and the data is sparse.

When we saw this process we were baffled. There had to be a better way. This was the starting point that really kicked us off.

Here is the thing though: most robots are not built to handle humid and wet environments. Salt water is the enemy of anything mechanical, and corrosion is a constant pain. Don't get me started on underwater computer vision, which has to see through water turbidity and suspended particles. Fish move unpredictably and deform while swimming. Occlusion is constant. Calibration is tricky in uncontrolled setups. Handling live fish with robotics is another challenge that hasn't really been solved before: fish are slippery, fragile, and stress easily. All of this is coupled with the requirement that every material must be food safe.

On the vision side we are using Luxonis OAK cameras, which give us depth plus RGB in a compact form factor. The onboard Myriad X VPU lets us run lightweight inference directly on the camera for things like detection and tracking, without constantly sending raw frames over USB. For heavier workloads like segmentation and keypoint extraction we step up to Nvidia Jetsons; we have tested on the Orin Nano and Orin NX depending on power and thermal constraints at different sites.

The models themselves are CNN- and transformer-based architectures: YOLO variants for detection, custom segmentation heads for body outlines, and keypoint models for anatomical landmarks. The tricky part is getting these to run fast enough on edge hardware. We use a mix of TensorRT, OpenVINO, and ONNX Runtime depending on the deployment target. Quantization has been a whole journey. INT8 quantization on TensorRT gives us the speed we need, but you have to be careful about accuracy degradation, especially on the segmentation outputs where boundary precision matters.
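To make the calibration point concrete, here is a toy post-training quantization sketch. This is not OctaPulse's pipeline (the `calibrate_scale` helper, symmetric per-tensor scaling, and the example values are all illustrative assumptions); it just shows why a calibration set that misses production variance causes quantized values to clip:

```python
# Toy post-training INT8 quantization with a calibration set.
# The scale is derived from the calibration range, so any production
# value outside that range clips to the INT8 extremes.

def calibrate_scale(calibration_values, num_bits=8):
    """Derive a symmetric quantization scale from calibration data."""
    max_abs = max(abs(v) for v in calibration_values)
    qmax = 2 ** (num_bits - 1) - 1  # 127 for INT8
    return max_abs / qmax

def quantize(value, scale):
    """Map a float to an INT8 level, clipping to the representable range."""
    return max(-128, min(127, round(value / scale)))

def dequantize(q, scale):
    return q * scale

# Calibration set drawn only from "typical" conditions.
calib = [-1.0, -0.5, 0.0, 0.5, 1.0]
scale = calibrate_scale(calib)

# A value inside the calibrated range round-trips with small error...
x = 0.7
err_in = abs(dequantize(quantize(x, scale), scale) - x)

# ...but a production value outside it (say, unusual lighting) clips hard.
y = 3.0
err_out = abs(dequantize(quantize(y, scale), scale) - y)

print(err_in < 0.01, err_out > 1.0)  # prints: True True
```

This is why the calibration datasets described above need to span lighting, turbidity, and density variance: the quantization range is only as good as the data used to fit it.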
We spent a lot of time building calibration datasets that actually represent the variance we see on farms. Lighting changes throughout the day, water clarity shifts, fish density varies. Your calibration set needs to capture all of that or your quantized model falls apart in production.

There is no wifi at most of these farms, so we use Starlink for connectivity in remote or offshore locations. Everything runs locally first and syncs when a connection is available. We are not streaming video to the cloud; all inference happens on device.

Behind the scenes we have been building our own internal tooling for labeling, task assignment, and model management. Early on we tried existing labeling platforms, but they did not fit our workflow: we needed tight integration between labeling, training pipelines, and deployment. So we built our own system where we can assign labeling tasks to annotators, track progress, version datasets, and push models to edge devices with a single command. It is not fancy, but it keeps everything under our control and makes iteration fast. When you are trying
Midjourney engineer debuts new vibe-coded, open source standard Pretext to revolutionize web design
For three decades, the web has existed in a state of architectural denial. It is a platform originally conceived to share static physics papers, yet it is now tasked with rendering the most complex, interactive, and generative interfaces humanity has ever conceived. At the heart of this tension lies a single, invisible, and prohibitively expensive operation known as "layout reflow." Whenever a developer needs to know the height of a paragraph or the position of a line to build a modern interface, they must ask the browser’s Document Object Model (DOM), the standard by which developers can create and modify webpages. In response, the browser often has to recalculate the geometry of the entire page — a process akin to a city being forced to redraw its entire map every time a resident opens their front door. Last Friday, March 27, 2026, Cheng Lou — a prominent software engineer whose work on React, ReScript, and Midjourney has defined much of the modern frontend landscape — announced on the social network X that he had "crawled through depths of hell" to release an open source (MIT License) solution: Pretext, which he coded using AI vibe coding tools and models like OpenAI's Codex and Anthropic's Claude. It is a 15KB, zero-dependency TypeScript library that allows for multiline text measurement and layout entirely in "userland," bypassing the DOM and its performance bottlenecks. Without getting too technical, in short, Lou's Pretext turns text blocks on the web into fully dynamic, interactive and responsive spaces, able to adapt and smoothly move around any other object on a webpage, preserving letter order and spaces between words and lines, even when a user clicks and drags other objects to intersect with the text, or resizes their browser window dramatically. Ironically, it's difficult with mere text alone to convey how significant Lou's latest release is for the entire web going forward. Fortunately, other third-party developers whipped up quick demos with Pretext
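The core idea of measuring text in userland, without asking the browser's layout engine, can be illustrated with a toy greedy line-breaker. This is not Pretext's API (which works with real font metrics); the fixed per-glyph advance width `CHAR_W` and the helper names are simplifying assumptions:

```python
# Toy userland line-wrapping: given a container width and a fixed
# per-character advance width, compute line breaks and block height
# without any DOM queries (and therefore without triggering reflow).

CHAR_W = 8  # hypothetical fixed glyph advance width, in pixels

def wrap(text, container_width):
    """Greedy word wrap; returns the list of rendered lines."""
    max_chars = max(1, container_width // CHAR_W)
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

def measured_height(text, container_width, line_height=16):
    """Height of the text block in pixels, computed entirely in userland."""
    return len(wrap(text, container_width)) * line_height

print(wrap("the quick brown fox jumps over the lazy dog", 120))
# prints: ['the quick brown', 'fox jumps over', 'the lazy dog']
print(measured_height("the quick brown fox jumps over the lazy dog", 120))
# prints: 48
```

Because every measurement is a pure function of the inputs, an animation or drag interaction can re-query geometry every frame at negligible cost, which is the performance property the article attributes to Pretext.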
[AI Agent] Master Tracking: Complete AI Agent Implementation
## 🎯 Goal

Track the complete implementation of the autonomous Python AI Agent for CoffeeOrderSystem.

---

## 📋 Implementation Steps

### Phase 1: Infrastructure (Week 1)
- [ ] **Step 1:** Setup Python AI Agent Service infrastructure #133
  - Python service with FastAPI
  - Docker integration
  - Basic health checks
  - Makefile commands

### Phase 2: AI Integration (Week 1-2)
- [ ] **Step 2:** Implement Cognee integration with semantic code search #134
  - CogneeService with RAG search
  - Architecture context gathering
  - Entity file discovery
  - Integration tests
- [ ] **Step 3:** Implement PlannerAgent with LangChain #135
  - TaskPlan models
  - LangChain planning chain
  - Prompt templates
  - Plan generation and posting

### Phase 3: Code Generation (Week 2-3)
- [ ] **Step 4:** Implement DevAgent for code generation #136
  - GitHub file operations
  - Code generation prompts
  - Create/modify/delete operations
  - Branch management
- [ ] **Step 5:** Implement PRAgent and workflow orchestration #137
  - PR creation with rich descriptions
  - Workflow orchestrator
  - Background task processing
  - Complete end-to-end flow

### Phase 4: Advanced Features (Week 3-4)
- [ ] **Step 6:** Add learning/memory system
  - Store successful patterns in Cognee
  - Learn from PR reviews
  - Avoid failed patterns
  - Improve over time
- [ ] **Step 7:** Add GitHub webhook listener
  - Auto-trigger on issue label
  - Real-time processing
  - Queue management
  - Concurrent task handling

---

## 🎯 Success Criteria

### MVP (Minimum Viable Product)
- ✅ Agent creates plans for issues
- ✅ Agent generates compilable code
- ✅ Agent creates PRs with descriptions
- ✅ Works for simple tasks (add field, update config)
- ✅ Error handling with GitHub notifications

### Production Ready
- ☐ Handles complex multi-layer changes
- ☐ Learns from successful PRs
- ☐ Automatic triggering via webhooks
- ☐ Rate limiting and queue management
- ☐ Comprehensive test coverage (>80%)
- ☐ Monitoring and metrics

---

## 📊 Architecture Overview

```
┌────────────────────────────────────┐
│ GitHub Issues (labeled: ai-agent)  │
└───────────────┬────────────────────┘
                │ (webhook or manual trigger)
                ↓
┌────────────────────────────────────┐
│        WorkflowOrchestrator        │
└───────────────┬────────────────────┘
       ┌────────┼────────┐
       ↓        ↓        ↓
 PlannerAgent DevAgent PRAgent
    │   │      │   │      │
    ↓   ↓      ↓   ↓      ↓
 Cognee LLM   LLM GitHub GitHub
 (RAG) (GPT-4)    (API)  (API)
```

### Component Responsibilities

**PlannerAgent:**
- Analyzes GitHub issue
- Searches codebase via Cognee
- Creates structured TaskPlan
- Posts plan as issue comment

**DevAgent:**
- Generates code via LLM
- Creates/modifies/deletes files
- Commits to feature branch
- Preserves code style

**PRAgent:**
- Creates pull request
- Writes comprehensive PR description
- Links to original issue
- Adds testing checklist

**WorkflowOrchestrator:**
- Coordinates all agents
- Handles errors
- Posts progress updates
- Manages background execution

---

## 📦 Tech Stack

### Core
- **Python 3.11+** - Agent runtime
- **FastAPI** - Web framework
- **LangChain** - LLM orchestration
- **OpenAI GPT-4** - Code generation
- **PyGithub** - GitHub API client

### AI/ML
- **Cognee** - RAG and semantic search
- **OpenAI Embeddings** - Vector search
- **LangChain Chains** - Prompt management

### Infrastructure
- **Docker** - Containerization
- **PostgreSQL** - Shared with .NET API
- **uvicorn** - ASGI server

---

## 📝 Usage Example

### 1. Create Issue
```markdown
Title: Add loyalty points to Customer
Labels: ai-agent, enhancement

Description:
Add LoyaltyPoints field (int, default 0) to Customer entity.

Requirements:
- Update Domain/Entities/Customer.cs
- Update Application/DTOs/CustomerDto.cs
- Create EF Core migration
- Add unit tests
```

### 2. Trigger Agent
```bash
make agent-process
# Enter issue number: 138
```

### 3. Monitor Progress
Issue comments show:
```
🤖 AI Agent Started
Phase 1/3: Analyzing task...

🤖 Execution Plan
Summary: Add LoyaltyPoints field to Customer entity
Steps:
1. MODIFY Domain/Entities/Customer.cs
2. MODIFY Application/DTOs/CustomerDto.cs
3. CREATE Migration file
4. CREATE Test file

🛠️ Phase 2/3: Generating code (4 files)...
✅ Step 1/4: modify Customer.cs
✅ Step 2/4: modify CustomerDto.cs
...
📦 Phase 3/3: Creating pull request...

🤖 Pull Request Created: #139
Branch: ai-agent/issue-138
Ready for review!
```

### 4. Review PR
PR includes:
- Closes #138
- Comprehensive description
- File changes summary
- Testing checklist
- Risk assessment

### 5. Merge
Agent learns from successful merge for future tasks.

---

## 🧪 Testing Strategy

### Unit Tests
- Agent logic (plan parsing, code generation)
- Service mocks (GitHub, Cognee)
- P
Show HN: I built a way to prove your software kept its promises
Software makes commitments all the time: "I won't transfer more than $500," "I'll only access these three APIs," "I won't modify production data." But there is no way to verify that it actually kept those commitments after the fact. All you can do is trust the logs, which the software itself wrote.

I built Nobulex to fix this. It is open-source middleware that does three things:

1. Lets you define behavioral rules in a simple DSL (permit, forbid, require)

2. Intercepts all actions at runtime and blocks anything that would violate the rules

3. Logs everything in a hash-chained audit trail that anyone can independently verify, not just the operator.

The key insight: you can't audit a neural network's reasoning, but you can audit its actions against stated commitments. `verify(rules, actionLog)` is always deterministic.

```
npm install @nobulex/identity @nobulex/covenant-lang @nobulex/middleware
```

Three packages, three lines to integrate. The rule language is Cedar-inspired:

```
covenant MyAgent {
  permit read;
  forbid transfer where amount > 500;
  require log_all;
}
```

Site: nobulex.com | 6,100+ tests across 61 packages. MIT licensed.

I'd love feedback on the rule language — is the permit/forbid syntax intuitive, or would you design the DSL differently?

I'm 15 and built this solo. Happy to answer anything about the architecture.
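The hash-chained audit idea above can be sketched in a few lines. This is not Nobulex's implementation (Nobulex is TypeScript, and its covenant DSL is far richer); here a single hard-coded spending limit stands in for the rules, and `append_action`/`verify` are illustrative names:

```python
# Toy sketch of a hash-chained audit trail with deterministic verification.
# Each entry commits to the hash of the previous one, so rewriting history
# breaks the chain; verification replays rules against the recorded actions.
import hashlib
import json

def append_action(log, action):
    """Append an action, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    log.append({
        "action": action,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify(max_transfer, log):
    """Deterministically check chain integrity and rule compliance."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash:
            return False  # chain broken: an entry was dropped or reordered
        payload = json.dumps(
            {"action": entry["action"], "prev": entry["prev"]}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False  # entry was tampered with after the fact
        a = entry["action"]
        if a.get("type") == "transfer" and a.get("amount", 0) > max_transfer:
            return False  # rule violated: forbid transfer where amount > 500
        prev_hash = entry["hash"]
    return True

log = []
append_action(log, {"type": "read", "resource": "orders"})
append_action(log, {"type": "transfer", "amount": 120})
print(verify(500, log))  # prints: True  (honest log passes)

log[1]["action"]["amount"] = 9000  # tamper: rewrite history
print(verify(500, log))  # prints: False (hash mismatch exposes the edit)
```

The key property is the one claimed in the post: given the same rules and the same log, verification is deterministic, so any third party can re-run it.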
Repository Audit Available
Deep analysis of langgenius/dify — architecture, costs, security, dependencies & more
Pricing found: $59, $159, $590
Key features include: Sophisticated Workflow in Minutes, Amplify with Any Global Large Language Models, Launch Right Away, Build Upon Others' Creation, Get Your Data LLM Ready with RAG, Bridge Your Systems / Platforms with Native MCP Integration.
Dify has a public GitHub repository with 135,161 stars.
Based on user reviews and social mentions, the most frequently recurring terms are: generative AI, OpenAI, Anthropic. No specific pain points could be identified from the available data.
Jason Liu
Creator at Instructor (structured outputs)
1 mention