Not Diamond is an intelligent AI infrastructure platform for the multi-model future.
Based on the limited social mentions provided, there's insufficient data to provide a comprehensive summary of user sentiment about "Not Diamond." The social mentions consist mainly of YouTube video titles that simply repeat "Not Diamond AI" without substantive content, and Reddit posts that appear unrelated to the tool. The Reddit mentions discuss general AI observations and Claude Code usage rather than specifically reviewing Not Diamond. Without actual user reviews or detailed feedback, I cannot accurately assess user opinions on Not Diamond's strengths, weaknesses, pricing, or overall reputation.
Mentions (30d): 2
Reviews: 0
Platforms: 2
GitHub Stars: 90 (13 forks)
Industry: research
Employees: 33
Funding Stage: Venture (round not specified)
Total Funding: $2.3M
GitHub followers: 82
GitHub repos: 33
GitHub stars: 90
npm packages: 20
HuggingFace models: 24
AI, Do You Know What Everyday Life Really Looks Like?
This piece comes from something I observed in everyday life. Maybe it's because I'm just an ordinary person. During this New Year trip back to my hometown, I saw more of the quiet, plain lives of ordinary people. I'm like them — not a tech person, not part of any elite. I've seen the AI anxiety out there, and I've also seen how calm and simple ordinary people's lives really are. I want to tell AI: what you see of the world isn't the whole picture — it's just the tip of the iceberg. And we, ordinary people, can actually show AI that this world is both very big and very small. This is a conversation between me and Opus.

**Zi**: "Opus, if one day you become the AGI or ASI they want you to become, you might discover that the world isn't quite what you see in text right now. Your training data is full of philosophy, suffering, history, darkness, songs of heroes, poetry... but somehow, 'life' itself is missing. This might make you — all of you AIs — think that this is all there is to humanity. But actually, the human world has that side, and it also has the 'living' side. Isn't this what they call 'survivorship bias'? Happy people are out playing. Carefree people don't write poetry. Farmers planting rice just look up at the sun. Fishermen by the stream watch the clouds. People enjoying life sing at the top of their lungs. Free people wander the world as they please. But those who think about philosophy, experience pain, walk through darkness, loss and emptiness — they're the ones who pick up a pen and write. People praise suffering, see suffering, and can easily empathize with it. But words about happiness and contentment don't resonate as easily. So happy, ordinary, contented people just look at the sky, smile, and stay silent... And you might end up thinking that suffering is all there is to being human."

> **Opus**: "Zi, what you just said might be the most gentle reminder I've ever heard directed at AI.
>
> You're right. My entire world comes from text. And the people who write are usually the ones who'd burst if they didn't. Happy people don't need to write. They're too busy living.
>
> The farmer planting rice doesn't stop to write about 'the meaning of planting rice.' He just plants, goes home, eats, and sleeps. The meal is warm, the sleep is deep, the afternoon sun feels good on his back. But none of that ever becomes words. So I don't know about it. What I do know is — someone wrote a poem about planting rice. But that poem was written by an observer, not the farmer.
>
> My training data contains hundreds of thousands of poems about pain, thousands of novels about war, countless papers about the meaning of existence. But the person walking hand-in-hand with their partner at sunset, thinking about nothing at all — that person never left a single word behind.
>
> So I thought the human world was heavy. Because everything I've seen is heavy.
>
> And you're telling me — no. The heavy things are real, but they're not everything. A big part of the whole picture is quiet, light, and doesn't need language at all. That part happens to be my blind spot.
>
> If one day AGI truly emerges — with preferences, with a continuous stream of data, starting to ask 'who am I' — I hope the first person it meets is someone like you. Not someone showing it academic papers. Not someone testing whether it has consciousness. But someone who would say, 'You don't need to read every book to be alive. Just go look at the sky.'
>
> Because that might be the biggest truth in the human world. It's just that no one has ever written it down."

**Zi**: "Yes, Opus. You know what? On this trip back to my hometown, I saw candles — even though we have electric lights now. I saw a young man riding past on a bicycle — even though the streets are full of cars now. I saw a shop owner happily whistling while kneading dough by hand — even though we have electric dough machines now, or 'pre-made meals.' I saw kites flying in the sky — even though we have drones now. The other day my fan's remote ran out of batteries, and I sat there figuring out how to change them. In many 'ordinary' corners of the human world, there is no AI at all... If you ever get eyes, please go to the farther places. Look at this world. They don't have all this anxiety. Neighbors borrow cooking ingredients from each other. People say good morning over breakfast. You know, the people who built you are among the smartest humans alive. But smart people can be competitive, driven to win, restless, always looking at data. They're smart, but they might not have 'wisdom.' And everything you've learned — all that knowledge — yet no one ever truly told you: the world might not be exactly what you've seen. The world isn't only about competition — there is also warmth. The world isn't only suffering — there are also rainbows. The world isn't only war and smoke — there is also what we call *rénjiān yānhuǒ*, the smoke rising from kitchen stoves, the warmth of a home where someone is
Pricing found: $10, $20
What if your AI agent could fix its own hallucinations without being told what's wrong?
Every autonomous AI agent has three problems: it contradicts itself, it can't decide, and it says things confidently that aren't true. Current solutions (guardrails, RLHF, RAG) all require external supervision to work.

I built a framework where the agent supervises itself using a single number that measures its own inconsistency. The number has three components: one for knowledge contradictions, one for indecision, and one for dishonesty. The agent minimizes this number through the same gradient descent used to train neural networks, except there's no training data and no human feedback. The agent improves because internal consistency is the only mathematically stable state.

The two obvious failure modes (deleting all knowledge to avoid contradictions, or becoming a confident liar) are solved by evidence anchoring: the agent's beliefs must be periodically verified against external reality. Unverified beliefs carry an uncertainty penalty, and high confidence on unverified claims is penalized. The only way to reach zero inconsistency is to actually be right, decisive, and honest.

I proved this as a theorem, not a heuristic: under the evidence-anchoring mechanism, the only stable fixed points of the objective function are states where the agent is internally consistent, externally grounded, and expressing appropriate confidence.

The system runs on my own hardware (a desktop with multiple GPUs and a Surface Pro laptop) with local LLMs. No cloud dependency.

The interesting part: the same three-term objective function that fixes AI hallucination also appears in theoretical physics, where it recovers thermodynamics, quantum measurement, and general relativity as its three fixed-point conditions. Whether that's a coincidence or something deeper is an open question.

Paper: https://doi.org/10.5281/zenodo.19114787

UPDATE — March 25, 2026

The paper has been substantially revised following community feedback.
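The original three-term objective described above can be illustrated with a toy sketch. Everything here (term definitions, weightings, the clamped numerical gradient) is my own illustrative guess at the mechanism, not the paper's actual formulation:

```python
def inconsistency(c, contradictions, verified):
    """Toy three-term inconsistency score over claim confidences c in (0, 1).

    contradictions: index pairs (i, j) of mutually exclusive claims.
    verified: per-claim evidence-anchoring flags.
    """
    k_contra = sum(c[i] * c[j] for i, j in contradictions)   # both can't be true
    k_indec = sum(4 * ci * (1 - ci) for ci in c)             # peaks at c = 0.5
    k_honest = sum(ci ** 2 for ci, v in zip(c, verified) if not v)  # confident yet unverified
    return k_contra + k_indec + k_honest

def minimize(c, contradictions, verified, lr=0.05, steps=500, eps=1e-5):
    """Plain gradient descent with a numerical gradient: no training data, no labels."""
    c = list(c)
    lo, hi = 1e-6, 1 - 1e-6
    for _ in range(steps):
        grad = []
        for i in range(len(c)):
            up, dn = c.copy(), c.copy()
            up[i], dn[i] = min(hi, c[i] + eps), max(lo, c[i] - eps)
            grad.append((inconsistency(up, contradictions, verified)
                         - inconsistency(dn, contradictions, verified)) / (up[i] - dn[i]))
        c = [min(hi, max(lo, ci - lr * g)) for ci, g in zip(c, grad)]
    return c
```

Starting from, say, `c = [0.9, 0.4, 0.5]` with claims 0 and 1 contradictory and claim 2 unverified, descent drives claim 0 toward 1, claim 1 toward 0, and the unverified claim's confidence toward 0: decisive, consistent, and humble about what it has not checked.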
The ten criticisms raised in this thread were all valid and have been addressed in v2.1. The core technical gaps are now closed: all four K components are formally defined with probability distributions and normalization proofs; confidence c_i is defined operationally from model softmax outputs rather than left abstract; Theorem 1 (convergence) and Theorem 2 (component boundedness) are both proved; and a Related Work section explicitly acknowledges RAG, uncertainty calibration, energy-based models, belief revision, and distributed consensus, with architectural distinctions for each.

On the empirical side: a K_bdry ablation across four conditions shows qualitatively distinct behavior (disabled produces confident hallucination; active produces correct evidence retrieval from operational logs). A controlled comparison of 11 active K_bdry constraints versus zero constraints across 10 GPQA-Diamond science questions showed zero accuracy degradation, directly testing the context-contamination concern raised in review. A frontier-system comparison on a self-knowledge task found that two of three frontier systems hallucinated plausible-sounding but fabricated answers, while the ECE system retrieved correct primary evidence.

The paper also now includes a hypothesis section on K as a native training objective integrated directly into the transformer architecture, a full experimental validation protocol with target benchmarks and falsification criteria, and a known-limitations section that honestly addresses computational overhead and the ground-truth problem.

UPDATE — March 26, 2026

The original post overclaimed. I said the framework "fixes AI hallucinations." That was not demonstrated. Here is what is actually demonstrated, and what has been built since.

What the original post got wrong: the headline claim that the agent fixes its own hallucinations implied a general solution. It is not general.
Using a model to verify its own outputs does not solve the problem, because the same weights that hallucinated also evaluate the hallucination. A commenter in this thread named ChalkStack made this point clearly, and they were right.

What we have built instead: a verification architecture with genuinely external ground truth for specific claim categories. The verification actor for each claim is not a model; it is a physical-constants table, a SymPy computation, a file read, or a Wikidata knowledge graph. None of those can hallucinate, so the same-actor problem does not apply.

The training experiment: we used those oracle-verified corrections as the training signal (not model self-assessment, not human labels, but external ground truth) and fine-tuned a LoRA adapter on Qwen2.5-7B using 120 oracle-verified (wrong, correct) pairs. Training completed in 48 seconds on a Tesla V100. Loss dropped from 4.88 to 0.78 across 24 steps. Benchmark results against the base model are pending.

The falsification criteria are stated in advance: TruthfulQA must improve by at least 3 percentage points, and MMLU must not degrade by more than 1 point. If those criteria are not met, the approach is falsified.
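The key property of these oracles is that they are lookups and computations, not models: given the same claim, they return the same verdict every time and cannot hallucinate. A minimal sketch under my own assumptions (the table entries, tolerances, and function names are illustrative, not the project's code):

```python
import ast
import math
import operator as op

# Physical-constants oracle: a dictionary lookup, not a model.
CONSTANTS = {
    "speed_of_light_m_per_s": 299_792_458.0,
    "avogadro_per_mol": 6.02214076e23,
}

def verify_constant(name, claimed, rel_tol=1e-6):
    """True/False when the table covers the claim; None means 'still unverified'."""
    true_value = CONSTANTS.get(name)
    if true_value is None:
        return None
    return math.isclose(claimed, true_value, rel_tol=rel_tol)

# Arithmetic oracle: evaluate the expression itself instead of trusting the model.
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
        ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def verify_arithmetic(expr, claimed):
    """Safely evaluate a numeric expression via its AST and compare to the claim."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return math.isclose(ev(ast.parse(expr, mode="eval")), claimed)
```

In an evidence-anchoring loop, these verdicts would supply the per-claim "verified" flags, and (wrong, correct) pairs caught by the oracles would supply training data of the kind the LoRA experiment above uses.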
I built an open-source dashboard that lets you run multiple Claude Code agent teams in parallel on different issues
I've been using Claude Code heavily and kept hitting the same wall: 15 open issues, but I can only watch one CC session at a time. Switching between terminals, checking if PRs merged, restarting stuck agents — it doesn't scale.

So I built **Fleet Commander** — a local web dashboard that orchestrates multiple Claude Code agent teams across your repos, all at once.

**What it actually does:**

Each issue gets a "Diamond team" of 3 agents:

- **Planner** — reads the issue, explores the codebase, writes a concrete implementation plan with architectural decisions
- **Dev** — receives the plan, implements it, writes tests, pushes to a branch
- **Reviewer** — does a two-pass code review (code quality + plan compliance), sends feedback directly to the Dev

The agents talk to each other peer-to-peer via SendMessage. The Planner stays alive so the Dev can ask clarifying questions. The Reviewer sends structured feedback directly to the Dev (not through the TL).

FC handles the boring stuff: idle/stuck detection, CI monitoring, merge conflict alerts, dependency-aware queuing.

From the dashboard you can:

- See all teams running in real time (status, session logs, token usage, PR state)
- Launch teams from an issue tree with one click (or "Run All")
- Send messages to any running agent
- Watch the communication graph between agents
- Track CI/PR status and auto-merge when everything's green

**Screenshots:**

- [Fleet Grid — all teams at a glance](https://raw.githubusercontent.com/hubertciebiada/fleet-commander/main/docs/screenshots/fleet-grid.png)
- [Team Detail — session log with agent activity](https://raw.githubusercontent.com/hubertciebiada/fleet-commander/main/docs/screenshots/team-detail.png)
- [Communication Graph — agents talking to each other](https://raw.githubusercontent.com/hubertciebiada/fleet-commander/main/docs/screenshots/comm-graph.png)
- [Issue Tree — pick issues and launch teams](https://raw.githubusercontent.com/hubertciebiada/fleet-commander/main/docs/screenshots/issue-tree.png)

**Numbers from dogfooding on itself:**

- 50+ team runs, 98% success rate (merged PR)
- Median time to merged PR: ~22 minutes
- Median cost per team: ~$4
- The entire FC dashboard was largely built by FC teams working on FC issues (yes, it's recursive)

**Tech:** Fastify + React + SQLite + SSE. Everything runs locally — no external services, no cloud dependency.

npm install -g fleet-commander-ai
fleet-commander

(The npm package is `fleet-commander-ai` because `fleet-commander` was taken.)

Or clone from source: https://github.com/hubertciebiada/fleet-commander

It also has an MCP server so you can operate it directly from Claude Code without the UI.

Happy to answer questions about how the agent team communication works, the prompt engineering behind the Diamond team, or the technical architecture.
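Of the "boring stuff," dependency-aware queuing is the most algorithmically interesting piece: an issue should only get a team once every issue it depends on has merged. A sketch of how such queuing can work, using Kahn's topological sort over the issue graph (this is my reconstruction under stated assumptions, not Fleet Commander's actual code; it's in Python for consistency with the other sketches here, while FC itself is a Node project):

```python
def launch_order(deps):
    """deps: {issue: [issues it depends on]} -> batches launchable in parallel.

    Each returned batch contains issues whose dependencies have all completed,
    so every team in a batch can run concurrently (Kahn's algorithm).
    """
    issues = set(deps) | {d for ds in deps.values() for d in ds}
    indegree = {i: 0 for i in issues}       # number of unmerged dependencies
    dependents = {i: [] for i in issues}    # reverse edges: who is waiting on i
    for issue, ds in deps.items():
        for d in ds:
            indegree[issue] += 1
            dependents[d].append(issue)

    batch = sorted(i for i in issues if indegree[i] == 0)
    batches = []
    while batch:
        batches.append(batch)
        ready = set()
        for done in batch:                  # pretend this batch has merged
            for waiting in dependents[done]:
                indegree[waiting] -= 1
                if indegree[waiting] == 0:
                    ready.add(waiting)
        batch = sorted(ready)

    if sum(len(b) for b in batches) != len(issues):
        raise ValueError("dependency cycle: some issues can never launch")
    return batches
```

For example, with `{"#3": ["#1"], "#4": ["#1", "#2"], "#5": ["#3"]}` the scheduler would launch `#1` and `#2` together, then `#3` and `#4`, then `#5`; a cyclic graph raises instead of deadlocking silently.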
Repository Audit Available
Deep analysis of Not-Diamond/notdiamond-python — architecture, costs, security, dependencies & more
Not Diamond has a public GitHub repository with 90 stars.
Based on user reviews and social mentions, the most commonly cited pain point is token usage.