DeepSeek, founded in 2023, focuses on researching world-leading foundation models and technologies for general artificial intelligence, tackling frontier problems in AI. Leveraging its self-developed training framework, self-built compute clusters, and GPU resources on the scale of ten thousand cards, the DeepSeek team released and open-sourced multiple large models with tens of billions of parameters within just half a year, such as the DeepSeek-LLM general-purpose large language model and the DeepSeek-Coder code model.
Based on the limited social mentions provided, DeepSeek Coder appears to be gaining attention in the AI coding space, with multiple YouTube videos discussing the tool. However, the mentions lack detailed user feedback about specific strengths, weaknesses, or pricing experiences. One Reddit post mentions it alongside other AI coding tools like Claude Code and Aider in the context of observability and monitoring solutions. Without substantial user reviews or detailed social discussions, it's difficult to assess overall user sentiment, though the YouTube coverage suggests growing interest in the tool's capabilities.
Mentions (30d): 1
Reviews: 0
Platforms: 2
GitHub stars: 22,960 (2,747 forks)
Industry: information technology & services
Employees: 200
GitHub followers: 87,547
GitHub repos: 32
GitHub stars: 22,960
npm packages: 20
HuggingFace models: 40
Is there something I can do about my prompts? [Long read, I’m sorry]
Hello everyone, this will be a bit of a long read; I have a lot of context to provide so I can paint the full picture of what I'm asking, but I'll be as concise as possible. I want to start this off by saying that I'm not an AI coder, engineer, or technician, whatever you call yourselves. The point is I don't use AI for work or coding or pretty much anything I've seen in the couple of subreddits I've been scrolling through so far today. I don't know anything about LLMs or any of the other technical terms and jargon that I see get thrown around a lot, but I feel like I could get insight from asking you all about this.

So I use DeepSeek primarily, and I use all the other apps (ChatGPT, Gemini, Grok, CoPilot, Claude, Perplexity) for prompt enhancement, and just to see what other results I could get for my prompts.

Okay, so pretty much the rest here is the extensive context part until I get to my question. I have this Marvel OC superhero I created. It's all just 3 documents (I have all 3 saved as both a .pdf and a .txt file): a Profile Doc (about 56 KB; gives names, powers, weaknesses, teams, and more), a Comics Doc (about 130 KB; details the 21 comics I've written for him, with info like their plots as well as main cover and variant cover concepts: an 18-issue series and 3 separate "one-shot" comics), and a Timeline Doc (about 20 KB; a timeline starting from when his powers awaken, which establishes the release year of his comics and what other comic runs he's in [like Avengers, X-Men, and other characters' solo series he appears in], and maps out information like when his powers develop, when he meets this person, when he joins this team, etc.). Everything in all 3 docs is perfectly laid out. Literally everything is organized and numbered or bulleted in some way, so it's all easy to read; it's not like these are big run-on sentences just slapped together. So I use these 3 documents for 2 prompts. Well, I say 2, but... let me explain.
There are 2, but they're more like the foundation for a series of prompts. The first prompt, the whole reason I even made this hero in the first place mind you, is that I upload the 3 docs and ask, "How would the events of Avengers Vol. 5 #1-3 or Uncanny X-Men #450 play out with this person in the story?" For a little further clarity, the timeline lists issues, some individually and some grouped together, so I'm not literally asking "_ comic or _ comic"; anyway, that starting question is the main question, the overarching task if you will.

The prompt breaks down into 3 sections. The first section is basically an intro: a 15-30 sentence breakdown of my hero at the start of the story, "as of the opening page of x" as I put it. It goes over his age, powers, teams, relationships, stage of development, and a couple of other things. The point of doing this is so the AI states the correct facts to itself initially and doesn't mess things up during the second section. For Section 2, I send the AI a summary I've written of the comic. It's to repeat that verbatim, then give me the integration. Section 3 is kind of a recap. It's just a breakdown of the differences between the 616 story (the main Marvel continuity, for those who don't know) and the integration. It also goes over how the events of the story affect his relationships.

Now for the "foundations" part. The way the hero's story is set up, his first 18 issues happen, and after those is when he joins other teams and appears in other people's comics. So the first of these prompts starts with the first X-Men issue he joins in 2003, and then I have a list of these that go through the timeline. It's the same prompt, just different comic names and plot details, so I'm feeding the AIs these prompts back to back. Now, the problem I'm having is really only in Section 1. It'll get things wrong like his age, what powers he has at different points, what teams he's on.
Stuff like that, when all it has to do is read the timeline doc up to the given comic, because everything needed for Section 1 is provided in that one document.

Now, the second prompt is the bigger one. I still use the 3 docs, but here's a differentiator: for this prompt, I use a different Comics Doc. It has all the same info but also adds a lot more. I created a fictional backstory about how and why Marvel created the character, plus a whole bunch of release logistics, because I have it set up so that Issue #1 releases as a surprise release. And to be consistent (I don't even know if this info is important or not), this version of the Comics Doc comes out to about 163 KB vs. the original's 130. So I'm asking the AIs, "What would it be like if on Saturday, June 1st, 2001, [Comic Name Here] Vol. 1 #1 was released as a real 616 comic?" And it goes through a whopping 6 sections. Section 1 is a reception of the issue and a seasonal and cultural context breakdown, Section 2 goes over the comic plot page by page and gives real-time fan reactions as they're reading it for the first time. Se
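For readers who do drive these models through an API rather than a chat app, the grounding step described above (attach the docs, then force a restatement of the facts before any integration) can be scripted. This is a hypothetical sketch; the function name and the exact prompt wording are invented for illustration, not taken from the post:

```python
def build_section1_prompt(profile: str, comics: str, timeline: str,
                          comic_ref: str) -> str:
    """Assemble a grounding prompt: attach all three documents, then
    ask the model to restate the hero's facts as of a given issue
    BEFORE doing any story integration, so Section 1 errors (wrong
    age, powers, teams) surface immediately."""
    return (
        "Using ONLY the attached documents, state the hero's age, powers, "
        f"teams, and relationships as of the opening page of {comic_ref}. "
        "Cite the timeline entry that supports each fact.\n\n"
        f"--- PROFILE ---\n{profile}\n\n"
        f"--- COMICS ---\n{comics}\n\n"
        f"--- TIMELINE ---\n{timeline}\n"
    )
```

Asking for a cited restatement first is a common mitigation for exactly the Section 1 drift the poster describes, since it forces the model to consult the timeline before generating.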
Hawkeye - open-source flight recorder & guardrails for AI agents, with drift detection and mobile monitoring
I built with Claude Code an observability tool for AI coding agents that runs 100% locally.

The problem: AI agents (Claude Code, Aider, AutoGPT...) can silently drift off-task, burn tokens, or touch sensitive files. You only notice when it's too late. Hawkeye records every action and evaluates drift in real time:

- Heuristic scorer (zero-cost, always on): detects dangerous commands, suspicious paths, error loops, and token burn without progress
- LLM scorer (optional): uses your local Ollama model (llama3.2, mistral, deepseek-coder, phi3...) to check if actions match the objective. No data leaves your machine
- Guardrails: file protection, command blocking, cost limits, directory scoping, network restrictions
- Auto-pause when drift goes critical
- Web dashboard, session replay, and an MCP server for agent self-awareness

One thing I'm stuck on: the cost/token tracking is unreliable. When agents like Claude Code don't expose token counts in their hooks, I'm left estimating from input/output text length. Has anyone dealt with this? How do you track actual token usage across different agents/providers?

No cloud dependency. SQLite storage. Everything stays local.

npm install -g hawkeye-ai

Npm: https://www.npmjs.com/package/hawkeye-ai?activeTab=readme
GitHub: github.com/MLaminekane/hawkeye

submitted by /u/Ok-Idea9032
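On the token-tracking question: when a provider exposes no usage data, the usual fallback is exactly the length-based estimate the author mentions. A minimal sketch, assuming a rule-of-thumb ratio of roughly 4 characters per token for English text under BPE tokenizers (the ratio, function names, and prices here are illustrative assumptions, not Hawkeye's actual implementation):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate for when the agent's hooks expose no
    usage counts. ~4 chars/token is a heuristic for English text;
    code and non-English text can deviate significantly."""
    return max(1, round(len(text) / chars_per_token))


def estimate_cost(prompt: str, completion: str,
                  in_price_per_1k: float, out_price_per_1k: float) -> float:
    """Estimated dollar cost of one request from text lengths alone,
    given per-1k-token input/output prices."""
    in_tok = estimate_tokens(prompt)
    out_tok = estimate_tokens(completion)
    return in_tok / 1000 * in_price_per_1k + out_tok / 1000 * out_price_per_1k
```

A more accurate route, where available, is to read real usage fields from the provider's responses (most APIs report input/output token counts) and fall back to a heuristic like this only when they are absent.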
🚨BREAKING: GPT-5.4 has the worst score on the SM-Bench among OpenAI’s models, ranking ahead only of GPT-5.2
submitted by /u/cloudinasty
Repository Audit Available
Deep analysis of deepseek-ai/DeepSeek-Coder — architecture, costs, security, dependencies & more
DeepSeek Coder has a public GitHub repository with 22,960 stars.
Based on user reviews and social mentions, the most common pain point is token usage.