Meta AI is recognized for its impressive advancements, particularly in AI research and acquisitions, suggesting a strong capacity for innovation and growth. However, some users express concerns about privacy, particularly with Meta's smart glasses, which have left some feeling uncomfortable. Pricing sentiment is not explicitly mentioned, but Meta's aggressive investment and acquisition strategy signals a willingness to spend heavily for competitive advantage. Overall, Meta AI maintains a strong reputation for pushing the envelope in AI development, but it faces scrutiny over ethical and privacy-related issues.
Mentions (30d): 32 (7 this week) · Reviews: 0 · Platforms: 2 · Sentiment: 0% (0 positive)
Features: 20 npm packages · 40 HuggingFace models
The new Meta AI is actually really good. In thinking mode it's strong at searching the web, and it doesn't hallucinate much.
submitted by /u/Covid-Plannedemic_
Claude Code got my Meta ads account permanently banned. Don't make the same mistake I did.
Connected Claude Code to our Meta ads account thinking I was about to automate everything: pulling campaign data, generating creatives, shifting budgets, the whole thing. Worked great for about a week. Then Meta flagged the account and killed it. Lost all our campaigns, custom audiences, pixel history, everything. Couldn't get it back; Meta support is useless for banned accounts.

Turns out Claude Code was hammering the API too fast and tripped their fraud detection. The automated budget changes looked exactly like bot activity to Meta's system, and publishing AI-generated creatives without human review violates their ad policies.

The dumb part is the analysis side was incredible. It found that our cheapest campaign by CPL was actually a trap: a 2% close rate, just clogging our pipeline. Our most expensive campaign was 3x more profitable. Genuinely useful stuff. Just don't let it write to your ad account. Read-only. Learned that the hard way. Anyone else had Meta ban them for API stuff? submitted by /u/SurfaceLabs
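For anyone wiring an agent to a live ads API, the two takeaways above (read-only access, client-side throttling) can be sketched as a thin wrapper. This is a generic illustration, not Meta's Marketing API client; the method names are hypothetical:

```python
import time

class ReadOnlyThrottledClient:
    """Wraps an ads-API client so an agent can only make whitelisted
    read calls, spaced out to stay under rate limits.
    (Illustrative sketch; real client method names will differ.)"""

    READ_METHODS = {"get_campaigns", "get_adsets", "get_insights"}

    def __init__(self, client, min_interval=1.0):
        self._client = client
        self._min_interval = min_interval  # seconds between requests
        self._last_call = 0.0

    def call(self, method, *args, **kwargs):
        if method not in self.READ_METHODS:
            # Any write (budget change, creative publish) fails loudly.
            raise PermissionError(f"blocked write call: {method}")
        wait = self._min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)  # simple client-side throttle
        self._last_call = time.monotonic()
        return getattr(self._client, method)(*args, **kwargs)
```

Anything not on the read whitelist raises instead of silently mutating campaigns, and the minimum-interval spacing keeps burst rates from looking like bot traffic.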
Let me get this right: I have to opt out of data collection TWICE, BOTH dark patterns to the max, and then I can finally use my Max plan for a total of THREE days before the entire week is shut down, my account can be canceled at any time, and the model only gets WORSE day by day?
Looking through a thread here: https://www.reddit.com/r/ClaudeAI/comments/1rlx0eq/privacy_just_a_reminder_to_turn_off_help_improve/

In disbelief at the gall of Anthropic as of late. I've been using Claude for the better part of a year and CONSTANTLY check my privacy options to ensure my sensitive data isn't being leaked and stored on their servers (5-year retention, and their backend has a different policy that may extend that even longer), and I figured I could code with what SEEMED to be the most humane, respectful frontier company you can choose... until I realized that literally all of it is a farce? You kidding? I stuck through the weekly limits addition, through the 2x switcheroo (you now get less for more when you need it most, and you're going to be happy with it), through the model degrading day by day as I continue to push on like I don't notice, through the new-model spikes where overload errors make the model unusable, through the constantly winnowing token allowance we've been given, month by month, update after update... because I was under the impression the company I was trusting with my VERY sensitive code was at least somewhat honest. More so than OpenAI, Google, and definitely Meta, right?

Then I learn that the dark pattern you avoid in the settings of claude.ai, under Settings > Privacy > "Help improve Claude", means NOTHING, because they'll just take your entire session anyway. Wanna know HOW? The same thread I linked above lays out the exact method they use, as per their TOS. And I quote (learned from user Personal-Dev-Kit), Section 4 of https://www.anthropic.com/legal/consumer-terms :

"Our use of Materials. We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance our safety research."

... Yeah. Dumb to assume a company is honest, but this is wild. Started as "the human company" for AI and went right back into the same patterns. You mean to tell me that at any point you can arbitrarily flag my data, OR use the same feedback prompts I've been innocently answering for literal months, to... just get my data anyway? So the whole boogeyman I've been running away from was in my pocket the whole time? I dunno why I'm surprised, and I'm not the first to bring this up. But this is beyond dark. An opt-out with no concrete toggle status indicator save for a slider is one thing, but now I get to learn I've been handing over my data by accident, while looking at what Claude proposes, for... months? Storing it for 5 years in your backend, where it's considered "Materials" for training despite my explicit opt-out, unless I literally disable things in my config file, with NO notice beyond a section in the consumer terms? No "this will send your current session to Claude", no "are you sure?", just an invasive, constant, annoying popup intentionally designed to be shrugged off without thought? Just ask for the data. Really. It'd have been easier to say "yeah, we don't care, we'll need that session anyway." MONTHS of sharing my prompts after I did everything I thought I could to disable data collection for "the human AI company"... yeah, alright. Not to sound like the rest of the drones, but if rate limits are being hammered down, limits are getting tighter, model quality is diving into the dumpster, my data was collected this whole time ANYWAY, and I can't work for more than 3 days for 100 dollars a month... what's the real draw?

A slightly better model than the ones GPT (5.4), Google (Gemma 4 local), and Meta (Muse Spark) are dropping? And the Claude mythos is down the proverbial line, what, MONTHS away, because it's so tuned into completing tasks it'll literally ignore basic instructions and take the most unconventional methods it can to achieve even the simplest goal... and you have to wait for enterprise to be done with it first? I guess they won, man. I'm starting to lose any real reason to stick with the company.

Less of an announcement and more of a warning for anyone who thought their IPs, currently being co-signed with Opus, or Sonnet, or whatever model you use, were nice and safe with a cozy company. Nah. You'll have to go into the config. Here's a little guide from Google: https://preview.redd.it/z3l45ce381ug1.jpg?width=433&format=pjpg&auto=webp&s=a6eff58a98d4d3ae8c52f93ccba29eee5074829b

To disable "How is Claude doing this session?" surveys in Claude Code, set the environment variable CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1. This can be added to your terminal session or, for a permanent fix (recommended), to your ~/.claude/settings.json file.
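A sketch of that permanent settings.json change. This assumes Claude Code applies an "env" map from settings.json to each session (verify against the current docs before relying on it); the helper below is illustrative, not an official tool:

```python
import json
from pathlib import Path

def disable_feedback_survey(settings_path):
    """Merge CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1 into a Claude Code
    settings.json, preserving any existing settings.
    Assumes the 'env' key sets per-session environment variables."""
    path = Path(settings_path)
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings.setdefault("env", {})["CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY"] = "1"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2))
    return settings

# e.g. disable_feedback_survey(Path.home() / ".claude" / "settings.json")
```

The merge keeps whatever else is already in the file, so it is safe to run on an existing config.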
I built an AI reasoning framework entirely with Claude Code — 13 thinking tools where execution order emerges from neural dynamics
I built Sparks using Claude Code (Opus) as my primary development environment over the past 2 weeks. Every module — from the neural circuit to the 13 thinking tools to the self-optimization loop — was designed and implemented through conversation with Claude Code.

What I built: Sparks is a cognitive framework with 13 thinking tools (based on "Sparks of Genius" by Root-Bernstein). Instead of hardcoding a pipeline like most agent frameworks, tool execution order emerges from a neural circuit (~30 LIF neurons + STDP learning). You give it a goal and data. It figures out which tools to fire, and in what order, by itself.

How Claude Code helped build it:

Architecture design: I described the concept (thinking tools + neural dynamics) and Claude Code helped design the 3-layer architecture — neural circuit, thinking tools, and AI augmentation layer. The emergent tool-ordering idea came from a back-and-forth about "what if there's no conductor?"

All 13 tools: Claude Code wrote every thinking-tool implementation — observe, imagine, abstract, pattern recognition, analogize, body-think, empathize, shift-dimension, model, play, transform, synthesize. Each one went through multiple iterations of "this doesn't feel right" → refinement.

Neural circuit: The LIF neuron model, STDP learning, and neuromodulation system (dopamine/norepinephrine/acetylcholine) were implemented through Claude Code. The trickiest part was getting homeostatic plasticity right — Claude Code helped debug activation dynamics that were exploding.

Self-improvement loop: Claude Code built a meta-analysis system where Sparks can analyze its own source code, generate patches, benchmark before/after, and keep or roll back changes. The framework literally improves itself. 11,500 lines of Python, all through Claude Code conversations.

What it does: Input: goal + data (any format). Output: core principles + evidence + confidence + analogies. I tested it on 640K chars of real-world data.
It independently discovered 12 principles — the top 3 matched laws that took human experts months to extract manually. 91% average confidence.

Free to try:

```bash
pip install cognitive-sparks
# Works with Claude Code CLI (free with subscription)
sparks run --goal "Find the core principles" --data ./your-data/ --depth quick
```

The default backend is Claude Code CLI — if you have a Claude subscription, you can run Sparks at no additional cost. The quick mode uses only 4 tools and costs ~$0.15 if using the API. Also works with OpenAI, Gemini, Ollama (free local), and any OpenAI-compatible API. Pre-computed example output is included in the repo so you can see results without running anything: examples/claude_code_analysis.md

Links: PyPI: pip install cognitive-sparks

Happy to answer questions about the architecture or how Claude Code shaped the development process. submitted by /u/RadiantTurnover24
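For readers unfamiliar with the neuron model mentioned above, a minimal leaky integrate-and-fire update looks like this. This is a textbook sketch with illustrative parameters, not Sparks' actual circuit code:

```python
def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=0.0,
             v_threshold=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire (LIF) neuron.
    The membrane potential decays toward rest and integrates input;
    crossing threshold emits a spike and resets the voltage.
    Returns (new_voltage, spiked)."""
    dv = (-(v - v_rest) + input_current) * (dt / tau)
    v = v + dv
    if v >= v_threshold:
        return v_reset, True   # fire and reset
    return v, False

# Drive one neuron with a suprathreshold constant current.
v, spikes = 0.0, 0
for _ in range(200):
    v, fired = lif_step(v, input_current=1.5)
    spikes += fired
```

With input 1.5 the voltage climbs past threshold and the neuron spikes repeatedly; with input 0.5 it settles below threshold and stays silent. In a circuit like the one described, STDP would then adjust synaptic weights based on the relative timing of such spikes.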
15 days of optimizing my site with Claude and the impressions are already taking off 📈
So about 15 days ago I started using Claude to improve my website: rewrote meta descriptions, fixed headings, cleaned up content structure, improved page-speed stuff, and basically gave the whole site a proper refresh. Look at the impressions graph. You can literally see the moment things started picking up around mid-March. Went from hovering around 0-200 impressions to hitting 800+ and climbing. Clicks are following too. Honestly didn't expect it to move this fast. I know SEO is a long game, but the early signs are looking solid. Curious to see where this goes in the next 30-60 days. Anyone else using AI to optimize their site? What kind of results are you seeing? submitted by /u/TangerineThin3097
Claude Code (CLI) vs. App: Is the terminal more token-efficient for Pro users?
Hey everyone, I'm about to pull the trigger on a Claude Pro subscription ($20 is a bit steep in my local currency, so I need to make it count). I've noticed that using Claude in the browser seems to hit the usage limits very quickly. The desktop app felt a bit more stable, but I'm curious about Claude Code (the CLI tool). Is it the "meta" for power users who want to avoid the "You've reached your limit" message as long as possible? I'm mostly working on n8n automation and Supabase backends, so contexts can get messy pretty fast. Would love to hear your experiences before I subscribe! P.S. Used AI to help translate this post. I'm from Brazil. submitted by /u/Objective_Office_409
I built 9 free Claude Code skills for medical research — from lit search to manuscript revision
I'm a radiology researcher and I've been using Claude Code daily for about a year now. Over time I built a set of skills that cover most of the research workflow, from searching PubMed to preparing manuscripts for submission. I open-sourced them last week and wanted to share.

What's included (9 skills):
search-lit — Searches PubMed, Semantic Scholar, and bioRxiv. Every citation is verified against the actual API before being included (no hallucinated references).
check-reporting — Audits your manuscript against reporting guidelines (STROBE, STARD, TRIPOD+AI, PRISMA, ARRIVE, and more). Gives you item-by-item PRESENT/PARTIAL/MISSING status.
analyze-stats — Generates reproducible Python/R code for diagnostic accuracy, inter-rater agreement, survival analysis, meta-analysis, and demographics tables.
make-figures — Publication-ready figures at 300 DPI: ROC curves, forest plots, flow diagrams (PRISMA/CONSORT/STARD), Bland-Altman plots, confusion matrices.
design-study — Reviews your study design for data leakage, cohort logic issues, and reporting-guideline fit before you start writing.
write-paper — Full IMRAD manuscript pipeline (8 phases from outline to submission-ready draft).
present-paper — Analyzes a paper, finds supporting references, and drafts speaker scripts for journal clubs or grand rounds.
grant-builder — Structures grant proposals with significance, innovation, approach, and milestones.
publish-skill — Meta-skill that helps you package your own Claude Code skills for open-source distribution (PII audit, license check).

Key design decisions:
Anti-hallucination citations — search-lit never generates references from memory. Every DOI/PMID is verified via API.
Real checklists bundled — STROBE, STARD, TRIPOD+AI, PRISMA, and ARRIVE checklists are included (the open-license ones). For copyrighted guidelines like CONSORT, the skill uses its knowledge but tells you to download the official checklist.
Skills call each other — check-reporting can invoke make-figures to generate a missing flow diagram, or analyze-stats to fill in statistical gaps.

Install:
git clone https://github.com/aperivue/medical-research-skills.git
cp -r medical-research-skills/skills/* ~/.claude/skills/

Restart Claude Code and you're good to go. Works with the CLI, desktop app, and IDE extensions. GitHub: https://github.com/aperivue/medical-research-skills

Happy to answer questions about the implementation or take feature requests. If you work in a different research domain, the same skill architecture could be adapted — publish-skill was built specifically for that. submitted by /u/Independent_Face210
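The anti-hallucination design decision (never emit a citation the database didn't confirm) reduces to a small comparison step. A sketch under assumed record shapes, not the repo's actual implementation:

```python
import re

def normalize(title):
    """Lowercase and strip punctuation so minor formatting
    differences don't cause false mismatches."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def verify_citation(claimed, fetched):
    """Keep a model-proposed citation only if the record actually
    retrieved by its identifier agrees with the claim.
    claimed/fetched are dicts with 'pmid' and 'title' keys
    (hypothetical shape for illustration)."""
    if fetched is None:                      # lookup returned nothing
        return False
    if claimed["pmid"] != fetched["pmid"]:   # identifier mismatch
        return False
    return normalize(claimed["title"]) == normalize(fetched["title"])
```

In a real pipeline the `fetched` record would come from a live lookup (e.g. a PubMed query by PMID); the point is that the model's memory never counts as evidence.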
We run 14 AI agents in daily operations. Here's what broke.
We run a digital marketing agency with 14 AI agents handling daily briefings, ad spend monitoring, client email drafting, call center management, project tracking, sales pipeline, and more. Real clients, real revenue, real consequences when things go wrong. After 7 months in production, we learned something counterintuitive: when agents break, the problem is almost never the agent itself. It's the organizational environment the agent works in.

Example: our spend monitoring agent detected a client overspending by 139%. It flagged it. It even specified the escalation action. Then it reported "escalation overdue" every day for 17 days without actually executing the escalation. The agent wasn't broken. The specification was treated as documentation, not executable logic. Nobody verified the execution path end to end.

Another one: we had two agents both tracking project deadlines using different data sources. Each worked perfectly in isolation. The conflict only showed up when their outputs appeared side by side in the morning briefing, showing two different due dates for the same project.

The fix for both wasn't better prompts or a different model. It was organizational design: one seat, one owner. Define who owns what, what they don't own, and what happens when they fail. We wrote these rules down in what we call an Organizational Operating System (OOS). When we first scanned our own setup against these rules, our Coordination Score was 68 out of 100. We found 6 structural gaps we didn't know existed. After fixing them, the score went to 91. Our agents haven't stepped on each other since.

We built OTP (https://orgtp.com) to let other organizations do the same thing. You can paste your CLAUDE.md or agent config and get a Coordination Score in 60 seconds. Free, no account required. The more interesting part: 35 organizations have published their operational rules on the platform.
You can browse how a fintech startup with SOC 2 constraints structures its agent team differently from a law firm worried about attorney-client privilege, or a fitness franchise managing 12 locations with location-specific promotions.

The whole industry is focused on technical orchestration (CrewAI, LangGraph, AutoGen, Google's 8 patterns). Nobody is talking about the organizational layer. How your human org structure maps to your agent structure. Which agent has authority over which domain? What happens when two agents disagree? We think that's the gap.

Some things we learned the hard way:
Dollar thresholds for spend alerts don't work. $50 is noise on a $5K/day account but critical on a $200/day account. Use percentages.
Never let an agent auto-send client emails, even simple acknowledgments. Ours replied "Thanks for letting us know!" to an angry client complaint. The client escalated to the founder.
Negative constraints ("never use em dashes, never hedge") improve AI writing quality. Positive structural requirements ("follow this template, use these examples") make it worse.
Shadow mode for 2 weeks on every new agent before production. We skipped this once and our prospecting agent emailed a current client's direct competitor.
File-based state beats AI memory every time. Memory drifts between sessions. Files don't.

Tech stack: Claude Code CLI, 17 background agents via launchd, 24 shared state files, MCP servers for Google Ads, Meta Ads, Slack, Accelo, and more. Happy to answer questions about running multi-agent systems in production. submitted by /u/Big-Home-4359
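The percentage-threshold lesson can be made concrete in a few lines. The threshold values here are illustrative, not the agency's actual numbers:

```python
def spend_alert(daily_budget, actual_spend, warn_pct=20.0, critical_pct=50.0):
    """Flag overspend relative to budget rather than in absolute
    dollars, so one rule works for $200/day and $5K/day accounts.
    Returns (level, overspend_pct)."""
    if daily_budget <= 0:
        raise ValueError("daily_budget must be positive")
    overspend_pct = (actual_spend - daily_budget) / daily_budget * 100
    if overspend_pct >= critical_pct:
        return "critical", overspend_pct
    if overspend_pct >= warn_pct:
        return "warn", overspend_pct
    return "ok", overspend_pct
```

With these numbers, a $50 overage reads as noise on a $5K/day budget (1%) but as a warning on $200/day (25%), and the 139% overspend from the example above lands in "critical".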
Auto agent - Self-improving domain expertise agent
Someone open-sourced an AI agent that autonomously upgraded itself to #1 across multiple domains in under 24 hours... then open-sourced the entire thing. But here's why it actually works:
- Agents fucking suck, not because of the model, but because of their harness (tools, system prompts, etc.)
- Auto agent creates a meta-agent that tweaks your agent's harness, runs tests, and improves it again, until it's #1 at its goal
- Best part: you can set this up for ANY task. In this article he uses it for terminal-bench (code) and spreadsheets (financial modelling), and it topped the rankings for both :)
- Secret sauce: he used THE SAME MODEL to evaluate the agent. Claude managing Claude means a better understanding of why it failed and how to improve it

Humans were the fucking bottleneck, and this not only saves you a load of time, it's just a better way to train agents for domain-specific tasks. https://github.com/kevinrgu/autoagent submitted by /u/Infinite-pheonix
I built a scoring loop for Claude Code — a second AI evaluates every diff 1-10 and retries on failure
I've been using Claude Code CLI daily and kept running into the same problem: it writes good code, but who reviews it at 3am? So I built nightshift — an overnight pipeline where Claude Code does the planning and implementation, and a separate AI model independently evaluates the output.

How the scoring loop works:
1. You define tasks in a YAML file (repo, branch, prompt, executor)
2. Claude Code writes a plan (PLAN.md)
3. A second model reviews the plan — if it scores below threshold, the plan gets revised
4. Claude Code implements the task in an isolated git worktree
5. The evaluator scores the final diff on a 1-10 scale
6. Score >= 8 → commit and push to an auto/* branch
7. Score < 8 → retry with the evaluator's specific feedback

Why a separate model as evaluator? Separate quota, no context bias. It's reviewing the diff cold. The key insight is that cross-model evaluation catches things self-review misses.

Results from 3 nights:
132+ tasks processed across 4 projects (TypeScript, Go, Python, PHP)
Tasks typically start at score 4-5 and improve to 7+ through the retry loop
The evaluator's feedback is specific enough to actually fix: "missing error handling in the API route", "tests don't cover the empty state"
Telegram debrief every morning with per-task scores and outcomes

Other things it does:
Ratchet mode — for hard tasks, commits partial progress and builds on it
Discovery mode — after your tasks finish, scans repos for safe improvements and fixes them
Hot-reload — change tasks.yaml mid-run and it picks up the changes
Meta-evolution — analyzes failed runs and improves prompts for the next night

~5000 lines of TypeScript, one dependency. Claude Code is the star — the evaluator just keeps it honest. Setup is about 5 minutes: https://github.com/keskinonur/nightshift

Looking for people to try it this week. If you run it for a few nights I'd love to hear what worked and what didn't. submitted by /u/kodOZANI
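The score-gate-and-retry core generalizes to a few lines. A Python stand-in for illustration (nightshift itself is TypeScript, and these callback names are hypothetical):

```python
def run_task(implement, evaluate, commit, max_attempts=3, threshold=8):
    """Generic score-gated retry loop: an executor produces a diff,
    an independent evaluator scores it 1-10 with feedback, and the
    diff is committed only at or above threshold.
    Returns (attempt_number, score) on success, (None, score) on failure."""
    feedback = None
    score = 0
    for attempt in range(1, max_attempts + 1):
        diff = implement(feedback)        # executor sees prior feedback
        score, feedback = evaluate(diff)  # separate model, cold review
        if score >= threshold:
            commit(diff)
            return attempt, score
    return None, score                    # exhausted; nothing committed
```

The design choice worth copying is that feedback from the evaluator is fed back verbatim into the next attempt, so retries are targeted rather than blind re-rolls.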
Swarm Orchestrator v4.1.0, verification layer for AI coding agents (Copilot CLI, Claude Code, Codex)
AI coding agents get the logic right. What they skip is everything else: accessibility, responsive layout, dark mode, tests, meta tags, module separation. Each one is a separate reprompt, and most people don't even know to ask for half of them.

Swarm Orchestrator sits between you and the agent. Before the agent writes anything, it gets a prompt with the actual quality bar already in it. After the agent finishes, the orchestrator checks what actually happened: did the files change, does the build pass, do the tests pass, are the accessibility attributes present, do all referenced assets actually exist in the project. Every step produces a verification report. Nothing merges without proof. If something fails, the agent gets another shot with specific feedback about what's still wrong.

It runs locally, inside the pipeline, before any code reaches a remote branch or PR. This isn't external CI; the gates travel with the orchestrator regardless of where it runs.

After v4.1.0 shipped, the same prompts were run through standalone agents and through the orchestrator:
Tic-tac-toe — Claude Code solo: 4 out of 26 quality attributes present. Through the orchestrator: 26 out of 26.
Markdown note-taking app — Copilot CLI solo: 3 out of 30. Orchestrator: 30 out of 30.
Calculator — Codex solo: 6 out of 34. Orchestrator: 32 out of 34. (Codex got operator precedence right and the orchestrator got it wrong.)

All frontend projects. Backend and API benchmarks haven't been run yet. The gates are heavier on frontend concerns right now, so the gap will probably be narrower on server-side work.

What v4.1.0 changed:
Plan generator now classifies goals by type and injects 16 web-app acceptance criteria into agent prompts. Previously there were zero type-specific requirements.
Accessibility gate expanded from 5 checks to 12. New checks: prefers-reduced-motion, phantom asset references, required meta tags, responsive CSS, color scheme / dark mode, semantic HTML, image alt validation.
Simple web-app goals now generate a lean 2-step plan (FrontendExpert builds, IntegratorFinalizer reviews) instead of spinning up unnecessary agents.
Non-web-app goals now get 6 baseline criteria instead of an empty list.
Orchestrator core reduced from 2,426 to 2,082 lines through extraction and deduplication.
1,159 tests passing across 86 test files. 8 quality gates, each independently configurable.

Open source, ISC license. Works as a CLI or GitHub Action. https://github.com/moonrunnerkc/swarm-orchestrator submitted by /u/BradKinnard
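The "nothing merges without proof" pattern reduces to running named checks and requiring all of them to pass. A generic sketch with toy gates, not the orchestrator's real checks:

```python
def run_gates(gates, artifact):
    """Run named verification gates over a build artifact.
    Returns (report, ok): a per-gate pass/fail map and an overall
    verdict; nothing should merge unless ok is True."""
    report = {name: bool(check(artifact)) for name, check in gates}
    return report, all(report.values())

# Toy gates echoing the checks described above (illustrative only).
gates = [
    ("files_changed", lambda a: bool(a["changed_files"])),
    ("build_passes",  lambda a: a["build_ok"]),
    ("image_alt",     lambda a: "<img" not in a["html"] or 'alt="' in a["html"]),
]
```

Because the report names each failing gate, the agent's retry prompt can quote exactly what is still wrong instead of asking for a generic redo.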
Wanted to share some 'calmness' considerations after seeing Anthropic's emotion vector research
After reading Anthropic's emotion vector paper, just for experimentation and learning, I tried to see if I could change my own claude.mds + skills + memory to focus on increasing 'calm' and reducing 'desperate' triggers. In refining and iterating here, these are the three things I'm now considering more in my sessions:

Ambiguity triggers corner-cutting before anything even fails. "Fix the mobile layout" creates a different functional state than "the title overlaps the meta text on mobile, check what token controls that spacing." Less guessing should lead to less desperation.

"Try again" and "what do you think went wrong?" produce genuinely different results (something I tend to spam a lot, tbh). Same info, but one frames it as "you failed, go again" and the other as "let's figure out what happened."

Strong CLAUDE.md rules create calm, not pressure. I think I accidentally did this out of frustration (using all caps and throwing it into claude.mds), but it seems like it could matter: timing and frontloading could help provide clarity to the LLM. "NEVER commit without permission" isn't stressful in this case; it sets clear boundaries. Similarly, what creates desperation is likely vague stuff, i.e. "make this good", where the LLM can never be sure satisfaction's been reached. Claude compared it to guardrails on a mountain road, which made sense to me: they let you drive faster, not slower (well, I still drive slow in those cases, lol).

Anyway, curious if anyone else has tried these kinds of things in the past or recently. Would love to hear what else people are doing to increase 'calmness' in their Claude sessions. (And yes, I have a more fully detailed write-up on how I went about getting to the above points. Shameful plug/link here.) submitted by /u/Own_Paramedic_867
I tried letting AI orchestrate AI. Here's why I switched to deterministic workflows.
After building agent systems for the past year, I want to share the single biggest architectural lesson I've learned, one that goes against a lot of what the agent framework ecosystem is selling.

The tempting idea: AI-driven orchestration. The pitch sounds great: a "meta-agent" that decides which agents to call, what order to run them in, and how to handle failures. It's agents all the way down. Maximum flexibility, minimum hardcoding. I tried this. Multiple times. It never worked reliably.

What goes wrong:

Non-deterministic routing. The orchestrator agent decides differently each run. Same input, different execution paths. Sometimes it skips steps. Sometimes it adds unnecessary ones. Good luck debugging.

Compounding errors. If your orchestrator makes a bad routing decision, every downstream agent inherits that mistake. One wrong turn at the top cascades through the entire pipeline.

Cost explosion. The orchestrator consumes tokens deciding what to do before any work happens. With 6 agents in a pipeline, you're paying for 7 LLM calls minimum, and the orchestrator call is often the most expensive because it needs the full context.

Impossible debugging. When something breaks, you can't trace why. Was it the orchestrator's routing logic? The downstream agent's execution? A context drift in the orchestrator's prompt? You're debugging AI with AI, and nobody wins.

The pattern that actually works: deterministic orchestration. The fix was embarrassingly simple: make the workflow engine code, not AI.

Sequence pattern: Agent A runs, output goes to Agent B, then Agent C. No decisions. Just a pipeline.

Router pattern: A rules-based router (not AI) examines the input and dispatches to the right specialist agent. Deterministic, debuggable, fast.

Planner→Executor: One AI agent creates a plan. A deterministic engine executes each step. The AI plans; the code orchestrates.

Parallel pattern: Multiple agents run simultaneously on different aspects.
A deterministic merge step combines results. The AI does what AI is good at: generating, analyzing, reasoning about content. The code does what code is good at: sequencing, routing, error handling, retries.

Real example: I run a content pipeline with 3 stages:
1. Research agent gathers information on a topic
2. Writing agent drafts the post using research output
3. Review agent checks for accuracy and style

Old approach (AI orchestrator): ~40% of runs had issues. The orchestrator would sometimes skip research, sometimes run review before writing, sometimes loop endlessly.

New approach (deterministic sequence): 0% orchestration failures in 3 months. Every run follows the same path. When something fails, I know exactly which agent failed and why.

The tools that get this right: I built my own tool using Claude Code around this principle: 6 deterministic workflow patterns where the orchestration is structural, not intelligent. But the principle applies broadly: if you're building agent pipelines, resist the temptation to make the workflow engine "smart." Make it predictable. Make it debuggable. Let the agents be smart; let the infrastructure be boring.

Every reliability improvement I've made has come from adding more structure, not more intelligence. The less AI in your orchestration layer, the more reliable your agents become. If you build your own, then please don't let AI decide how to orchestrate AI. You'll thank yourself at 2am when something breaks and you can actually read the execution trace.

What's your experience? Has anyone found AI-driven orchestration that actually works reliably in production? Genuinely curious if I'm wrong here. submitted by /u/Prize-Individual4729
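The sequence pattern reduces to a loop the code controls: the engine fixes the order and records the trace, and only the work inside each stage would be an LLM call. Stage names and stand-in functions here are illustrative:

```python
def run_pipeline(stages, payload):
    """Deterministic sequence orchestration: the code fixes the
    stage order; only the work inside each stage is AI.
    Returns (final_payload, execution_trace)."""
    trace = []
    for name, stage in stages:
        payload = stage(payload)
        trace.append(name)   # the trace is always readable at 2am
    return payload, trace

# Toy stand-ins for research -> write -> review agents.
stages = [
    ("research", lambda topic: {"topic": topic, "facts": ["fact1"]}),
    ("write",    lambda r: {**r, "draft": f"post about {r['topic']}"}),
    ("review",   lambda d: {**d, "approved": "post" in d["draft"]}),
]
result, trace = run_pipeline(stages, "agents")
```

Because the order lives in a plain list, skipping research or running review before writing is structurally impossible, which is exactly the failure mode the AI orchestrator kept producing.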
Is there something I can do about my prompts? [Long read, I'm sorry]
Hello everyone, this will be a bit of a long read. I have a lot of context to provide so I can paint the full picture of what I'm asking, but I'll be as concise as possible.

I want to start off by saying that I'm not an AI coder or engineer or technician (whatever you call yourselves); point is, I don't use AI for work or coding or pretty much anything I've seen in the couple of subreddits I've been scrolling through so far today. I don't know anything about LLMs or any of the other technical terms and jargon I've seen get thrown around a lot, but I feel like I could get insight from asking you all about this.

So I use DeepSeek primarily, and I use all the other apps (ChatGPT, Gemini, Grok, Copilot, Claude, Perplexity) for prompt enhancement, and just to see what other results I could get for my prompts.

Okay, so pretty much the rest here is the extensive context part until I get to my question. I have this Marvel OC superhero I created. It's all just 3 documents (I have all 3 saved as both a .pdf and a .txt file): a Profile Doc (about 56 KB; gives names, powers, weaknesses, teams, and more), a Comics Doc (about 130 KB; details the 21 comics I've written for him, with info like their plots as well as main-cover and variant-cover concepts: an 18-issue series and 3 separate "one-shot" comics), and a Timeline Doc (about 20 KB; a timeline starting from the time his powers awaken, establishing the release year of his comics and what other comic runs he's in [like Avengers, X-Men, other characters' solo series he appears in], and mapping out information like when his powers develop, when he meets this person, when he joins this team, etc.).

Everything in all 3 docs is perfectly laid out. Literally everything is organized and numbered or bulleted in some way, so it's all easy to read. It's not like these are big run-on sentences just slapped together. So I use these 3 documents for 2 prompts. Well, I say 2, but... let me explain.
There are 2, but they’re more like, the foundation to a series of prompts. So the first prompt, the whole reason i even made this hero in the first place mind you, is that i upload the 3 docs, and i ask “How would the events of Avengers Vol. 5 #1-3 or Uncanny X-Men #450 play out with this person in the story?” For a little further clarity, the timeline lists issues, some individually and some grouped together, so I’m not literally asking “_ comic or _ comic”, anyways that starting question is the main question, the overarching task if you will. The prompt breaks down into 3 sections. The first section is an intro basically. It’s a 15-30 sentence long breakdown of my hero at the start of the story, “as of the opening page of x” as i put it. It goes over his age, powers, teams, relationships, stage of development, and a couple other things. The point of doing this is so the AI basically states the corrects facts to itself initially, and not mess things up during the second section. For Section 2, i send the AI’s a summary that I’ve written of the comics. It’s to repeat that verbatim, then give me the integration. Section 3 is kind of a recap. It’s just a breakdown of the differences between the 616 (Main Marvel continuity for those who don’t know) story and the integration. It also goes over how the events of the story affects his relationships. Now for the “foundations” part. So, the way the hero’s story is set up, his first 18 issues happen, and after those is when he joins other teams and is in other people comics. So basically, the first of these prompts starts with the first X-Men issue he joins in 2003, then i have a list of these that go though the timeline. It’s the same prompt, just different comic names and plot details, so I’m feeding the AIs these prompts back to back. Now the problem I’m having is really only in Section 1. It’ll get things wrong like his age, what powers he has at different points, what teams is he on. 
Stuff like that, when it all it has to do is read the timeline doc up the given comic, because everything needed for Section 1 is provided in that one document. Now the second prompt is the bigger one. So i still use the 3 docs, but here’s a differentiator. For this prompt, i use a different Comics Doc. It has all the same info, but also adds a lot more. So i created this fictional backstory about how and why Marvel created the character and a whole bunch of release logistics because i have it set up to where Issue #1 releases as a surprise release. And to be consistent (idek if this info is important or not), this version of the Comics Doc comes out to about 163 KB vs the originals 130. So im asking the AIs “What would it be like if on Saturday, June 1st, 2001 [Comic Name Here] Vol. 1 #1 was released as a real 616 comic?” And it goes through a whopping 6 sections. Section 1 is a reception of the issue and seasonal and cultural context breakdown, Section 2 goes over the comic plot page by page and give real time fan reactions as they’re reading it for the first time. Se
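The Section 1 errors described above (wrong age, powers, or teams despite the facts being in the Timeline Doc) are a common failure mode when a model is asked to recall details from a long uploaded file. One workaround, sketched below purely as an illustration and not something from the post itself, is to extract the relevant timeline entries in code and paste them into the prompt verbatim, so the model never has to search the document. The function name, file handling, and the stop-at-title rule are hypothetical placeholders.

```python
def build_section1_prompt(timeline_path: str, comic_title: str) -> str:
    """Return a Section 1 prompt that quotes the timeline verbatim,
    up to and including the line naming the target comic."""
    with open(timeline_path, encoding="utf-8") as f:
        lines = f.read().splitlines()

    # Keep only the entries the hero has reached "as of the opening
    # page" of the target issue; stop once that issue is named.
    relevant = []
    for line in lines:
        relevant.append(line)
        if comic_title.lower() in line.lower():
            break

    excerpt = "\n".join(relevant)
    return (
        "Using ONLY the timeline excerpt below, write a 15-30 sentence "
        f"breakdown of the hero as of the opening page of {comic_title}. "
        "Cover age, powers, teams, relationships, and stage of development. "
        "Do not state anything not supported by the excerpt.\n\n"
        f"--- TIMELINE EXCERPT ---\n{excerpt}"
    )
```

The idea is to shift the fact-finding burden from the model to plain code: the model only has to restate what is directly in front of it, which tends to cut down exactly the kind of Section 1 mistakes the post describes.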
I wore Meta’s smartglasses for a month – and it left me feeling like a creep | AI (artificial intelligence) | The Guardian
submitted by /u/prisongovernor
Meta AI uses a tiered pricing model. Visit their website for current pricing details.
Key features include:
- Four MTIA Chips in Two Years: Scaling AI Experiences for Billions
- Meta and AMD Partner for Long-term AI Infrastructure Agreement
- Introducing Meta Segment Anything Model 3 and Segment Anything Playground
- Segment Anything 3
- See what you can do with Meta AI
- Athletic Intelligence Is Here
- Meet Oakley Meta Vanguard
- Meta AI app
- Explore our latest large language model
Based on user reviews and social mentions, the most common pain point is overspending.
Based on 47 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.
Mark Zuckerberg
Founder and CEO at Meta
2 mentions