Explore our complete AI suite: Translate speech and text, integrate our API, or automate tasks with the new DeepL Agent.
I cannot provide a summary of user opinions about DeepL based on the provided content. The social mentions you've shared appear to be unrelated to DeepL (a translation software) and instead focus on various political topics, other AI companies, and unrelated news stories. To accurately summarize user sentiment about DeepL, I would need reviews and social mentions that specifically discuss the translation tool, its accuracy, pricing, features, and user experience.
Mentions (30d): 3
Reviews: 0
Platforms: 4
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 1,600
Funding Stage: Venture (Round not Specified)
Total Funding: $410.0M
X Users Find Their Real Names Are Being Googled in Israel After Using X Verification Software “Au10tix”
Alan Macleod

On January 30, the Department of Justice released its latest tranche of 3.5 million documents relating to Jeffrey Epstein. Years of emails, texts, and images were suddenly in the public domain. Epstein, a serial rapist, masterminded a global human trafficking and sexual abuse network, and could count princes, professors, and politicians among his closest friends and accomplices. MintPress News has been at the forefront of covering the Epstein saga, revealing his extremely close links to American and Israeli intelligence groups – a discovery that perhaps sheds light on why it took so long for the world’s most notorious pedophile to face accountability for his crimes. Many of the DOJ files have been heavily redacted in order to protect Epstein’s powerful clients. Still, they have exposed a massive elite nexus revolving around the New York billionaire, implicating presidents, diplomats, and plutocrats in his crimes, and imply that Epstein was significantly more powerful than first thought, shaping modern politics in ways never previously understood. With shocking new details emerging on a near-hourly basis, here are ten Epstein-related stories that have flown relatively under the radar.

The Israeli Government Installed Surveillance Cameras at Epstein’s New York Apartment

The Israeli government installed and maintained a hi-tech surveillance system at Epstein’s Manhattan apartment complex, including a network of alarms and cameras, emails show. Starting in 2016, the director of protective service at the Israeli mission to the United Nations controlled guests’ access to the Manhattan residence, and even performed background checks on prospective cleaners and other Epstein employees. Former Israeli prime minister Ehud Barak admitted visiting the apartment up to 100 times, and stayed there for long periods of time.
While Barak’s security may have been a concern, Epstein is known to have housed underage girls at the apartment, and many of his worst sexual crimes and most sordid parties were held there, raising questions as to what sort of images and data the Israeli government had access to. Epstein Plotted War With Iran Ehud Barak became one of Epstein’s closest associates, staying for extended periods of time at the billionaire’s residences. The pair would email, text, call, and meet constantly. A search for “Ehud Barak” elicits more than 3500 results in the latest file dump alone. The pair would talk politics, and shared a vision of the United States attacking Iran. In 2013, with negotiations between the International Atomic Energy Agency and Iran stalling, Epstein emailed Barak stating, in typically poor spelling and grammar: “hopefully somone suggests getting authorization now for Iran. the congress woudl do it.” Epstein would get his wish in 2025, when his close associate Donald Trump began bombing the country. Noam Chomsky Considered Epstein His “Best Friend” Epstein arranged a meeting between Barak and renowned leftist academic (and vehement critic of the U.S. and Israel) Noam Chomsky. An unlikely friendship between the notorious pedophile and star professor blossomed, with the pair regularly meeting up at each other’s houses for dinner. Chomsky flew on Epstein’s “Lolita Express” jet to attend a dinner with Woody Allen in New York. He also expressed his desire to visit Little St. James Island, Epstein’s notorious Caribbean hideaway, and the center of his trafficking operation. Chomsky considered Epstein his “best friend” according to an email sent by his wife, Valeria. 
The usually curt and matter-of-fact academic signed off his emails to Epstein with unexpectedly flowery language, such as “Like real friendship, deep and sincere and everlasting from both of us, Noam and Valeria.” Chomsky strongly supported Epstein until his dying day in a Manhattan prison cell, taking it upon himself to act as his unofficial crisis manager, describing his accusers as “publicity seekers or cranks of all sorts,” and denouncing the media as a “culture of gossip-mongers” destroying his stellar character. “Ive watched the horrible way you are being treated in the press and public,” he wrote, advising Epstein on tactics to fight the supposed smears against him. For a full rundown of the Chomsky-Epstein relationship, see the MintPress News investigation: “The Chomsky-Epstein Files: Unravelling a Web of Connections Between a Star Leftist Academic and a Notorious Pedophile.” Steve Bannon Developed a Plan to Help Epstein “Crush the Pedo Narrative” A second public figure running defense for Epstein was Steve Bannon. In public, the far-right strategist claimed that he was working on a documentary exposing Epstein. In private messaging, however, Bannon, like Chomsky, was advising Epstein on how best to repair his image. Just weeks before Epstein’s arrest and subsequent death, Bannon was messaging him, devising a complex media strategy
Pencil Bench (multi-step reasoning benchmark)
DeepSeek was a scam from the beginning submitted by /u/DigSignificant1419
I built a risk toolkit for investment portfolios
I've been investing through DeGiro and was always frustrated by the lack of risk metrics. There's a P&L and that's it; I want to know how volatile my portfolio is, how well diversified I am against crashes, and loads of other things. I discovered Claude Code, and two weeks later I had built Drawdn, a risk dashboard for retail investors: stress tests against real crashes, Monte Carlo simulations, a portfolio optimizer, deep dives per holding, alerts. Next.js + Python risk engine, ~280 tests, Stripe billing. I'm pretty happy with how it turned out and just launched early access today. If you invest and want to see what a crash would do to your portfolio: drawdn.com/crash-test (no signup, just enter your tickers). I do incur some costs for data and the risk calculations, but there's a good free tier for a solid risk analysis of your own portfolio. Would love feedback, and happy to talk about the workflow. submitted by /u/Hour-Associate-7628
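For readers curious what a Monte Carlo drawdown test involves under the hood, here is a minimal sketch in Python. It is not Drawdn's actual engine; the portfolio weights, return, and covariance numbers are invented for illustration.

```python
import numpy as np

def monte_carlo_drawdown(weights, mean_daily, cov_daily, n_paths=2000, horizon=252, seed=0):
    """Simulate correlated daily returns and estimate the max-drawdown distribution."""
    rng = np.random.default_rng(seed)
    # (paths, days, assets) draws of correlated daily asset returns
    rets = rng.multivariate_normal(mean_daily, cov_daily, size=(n_paths, horizon))
    port = rets @ weights                     # portfolio daily returns per path
    curve = np.cumprod(1.0 + port, axis=1)    # equity curves starting at 1.0
    peak = np.maximum.accumulate(curve, axis=1)
    drawdown = curve / peak - 1.0             # <= 0 everywhere
    worst = drawdown.min(axis=1)              # max drawdown of each path
    return np.percentile(worst, [5, 50, 95])  # bad / median / mild outcomes

# Toy 60/40 two-asset portfolio with invented daily statistics
w = np.array([0.6, 0.4])
mu = np.array([4e-4, 2e-4])
cov = np.array([[1.5e-4, 2.0e-5],
                [2.0e-5, 4.0e-5]])
p5, p50, p95 = monte_carlo_drawdown(w, mu, cov)
print(f"max drawdown, 5th pct {p5:.1%} / median {p50:.1%}")
```

Stress testing against real crashes is the same machinery with historical return paths substituted for the simulated ones.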
[P] Dante-2B: I'm training a 2.1B bilingual fully open Italian/English LLM from scratch on 2×H200. Phase 1 done — here's what I've built.
The problem

If you work with Italian text and local models, you know the pain. Every open-source LLM out there treats Italian as an afterthought — English-first tokenizer, English-first data, maybe some Italian sprinkled in during fine-tuning. The result: bloated token counts, poor morphology handling, and models that "speak Italian" the way a tourist orders coffee in Rome. I decided to fix this from the ground up.

What is Dante-2B

A 2.1B-parameter, decoder-only, dense transformer. Trained from scratch — no fine-tune of Llama, no adapter on Mistral. Random init to coherent Italian in 16 days on 2× H200 GPUs.

Architecture:
- LLaMA-style with GQA (20 query heads, 4 KV heads — a 5:1 ratio)
- SwiGLU FFN, RMSNorm, RoPE
- d_model=2560, 28 layers, d_head=128 (optimized for Flash Attention on H200)
- Weight-tied embeddings, no MoE — all 2.1B params active per token
- Custom 64K BPE tokenizer built specifically for Italian + English + code

Why the tokenizer matters

This is where most multilingual models silently fail. Standard English-centric tokenizers split l'intelligenza into l, ', intelligenza — 3 tokens for what any Italian speaker sees as 1.5 words. Multiply that across an entire document and you're wasting 20-30% of your context window on tokenizer overhead. Dante's tokenizer was trained on a character-balanced mix (~42% Italian, ~36% English, ~22% code) with a custom pre-tokenization regex that keeps Italian apostrophe contractions intact. Accented characters (à, è, é, ì, ò, ù) are pre-merged as atomic units — they're always single tokens, not two bytes glued together by luck. Small detail, massive impact on efficiency and quality for Italian text.

Training setup

Data: ~300B-token corpus. Italian web text (FineWeb-2 IT), English educational content (FineWeb-Edu), Italian public-domain literature (171K books), legal/parliamentary texts (Gazzetta Ufficiale, EuroParl), Wikipedia in both languages, and StarCoderData for code.
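The apostrophe-preserving pre-tokenization described above can be illustrated with a toy regex. This sketches the general technique only; it is not Dante-2B's actual tokenizer rule.

```python
import re

# Toy pre-tokenizer that keeps Italian apostrophe contractions intact
# (l'intelligenza stays one piece instead of l + ' + intelligenza).
# Illustrative only: NOT Dante-2B's actual regex.
ITALIAN_PRETOKEN = re.compile(
    r"[A-Za-zÀ-ÿ]+'[A-Za-zÀ-ÿ]+"   # contraction glued to the next word
    r"|[A-Za-zÀ-ÿ]+"               # plain words, accented letters included
    r"|\d+"                        # digit runs
    r"|[^\sA-Za-zÀ-ÿ\d]"           # any other single symbol
)

def pretokenize(text: str) -> list[str]:
    return ITALIAN_PRETOKEN.findall(text)

print(pretokenize("l'intelligenza artificiale è qui"))
# → ["l'intelligenza", 'artificiale', 'è', 'qui']
```

An English-centric pre-tokenizer would instead emit `l`, `'`, `intelligenza` as three pieces, which is exactly the overhead the post describes.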
Everything pre-tokenized into uint16 binary with quality tiers.

Phase 1 (just completed): 100B tokens at seq_len 2048. DeepSpeed ZeRO-2, torch.compile with reduce-overhead, FP8 via torchao. Cosine LR schedule 3e-4 → 3e-5 with 2000-step warmup. ~16 days, rock solid — no NaN events, no OOM, consistent 28% MFU.

Phase 2 (in progress): extending to 4096 context with 20B more tokens at reduced LR. Should take ~4-7 more days.

What it can do right now

After Phase 1 the model already generates coherent Italian text — proper grammar, correct use of articles, reasonable topic continuity. It's a 2B, so don't expect GPT-4 reasoning. But for a model this size, trained natively on Italian, the fluency is already beyond what I've seen from Italian fine-tunes of English models at similar scale. I'll share samples after Phase 2, when the model has full 4K context.

What's next
- Phase 2 completion (est. ~1 week)
- HuggingFace release of the base model: weights, tokenizer, config, full model card
- SFT phase for instruction following (Phase 3)
- Community benchmarks: I want to test against Italian fine-tunes of Llama/Gemma/Qwen at similar sizes

Why I'm posting now

I want to know what you'd actually find useful. A few questions for the community:
- Anyone working with Italian NLP? I'd love to know what benchmarks or tasks matter most to you.
- What eval suite would you want to see? I'm planning perplexity on held-out Italian text + standard benchmarks, but if there's a specific Italian eval set I should include, let me know.
- Interest in the tokenizer alone? The Italian-aware 64K BPE tokenizer might be useful even independently of the model — should I release it separately?
- Training logs / loss curves? Happy to share the full training story with all the numbers if there's interest.

About me

I'm a researcher and entrepreneur based in Rome.
PhD in Computer Engineering; I teach AI and emerging tech at LUISS university, and I run an innovation company (LEAF) that brings emerging technologies to businesses. Dante-2B started as a research project to prove that you don't need a massive cluster to train a decent model from scratch — you need good data, a clean architecture, and patience. Everything will be open-sourced. The whole pipeline, from corpus download to tokenizer training to pretraining scripts, will be on GitHub. Happy to answer any questions. 🇮🇹 Discussion also on r/LocalLLaMA here submitted by /u/angeletti89
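The "cosine 3e-4 → 3e-5 with 2000-step warmup" schedule from the training setup looks roughly like this. The total step count is a placeholder, since the post doesn't state the per-phase run length.

```python
import math

# Sketch of a linear-warmup + cosine-decay LR schedule.
# TOTAL_STEPS is an assumed placeholder, not the real Dante-2B run length.
PEAK, FLOOR, WARMUP, TOTAL_STEPS = 3e-4, 3e-5, 2000, 50_000

def lr_at(step: int) -> float:
    if step < WARMUP:                                # linear warmup from 0 to PEAK
        return PEAK * step / WARMUP
    t = (step - WARMUP) / (TOTAL_STEPS - WARMUP)     # 0 -> 1 across the decay phase
    return FLOOR + 0.5 * (PEAK - FLOOR) * (1.0 + math.cos(math.pi * t))

# 0.0 at step 0, PEAK (3e-4) at the end of warmup, FLOOR (3e-5) at the last step
print(lr_at(0), lr_at(2000), lr_at(50_000))
```

Frameworks like PyTorch ship equivalent schedulers (e.g. cosine annealing combined with a warmup phase); this standalone function just makes the shape explicit.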
Major drifting on Claude.ai web platform
Using Claude Sonnet 4.6. It had been amazing for weeks: it was almost never down, and when it did go down it was back up in under 30 minutes. But for the last few days Claude keeps drifting, so the code it writes never makes it into the actual artifacts. This happens in every single Claude chat now. submitted by /u/Ok-Communication8549
The job search grind was killing me so I built AI agents to do it
I'm a CPA, not a developer. I'm looking for a job at the intersection of AI and finance, and the process of searching for openings, doing company research, and tailoring my CV is such a massive time sink. So I automated it.

1-minute demo: https://youtu.be/L-8e5EkNv1w
Repo: https://github.com/muggl3mind/career-manager

This is NOT a resume auto-submitter or some kind of precursor to a SaaS product. I built it for myself, but it's saving me so much time I thought others might get some value out of it. The whole thing was built with Claude Code. You paste one prompt into Claude Code, it asks for your resume, then kicks off a bunch of subagents to do the research and drops you into a dashboard for review. It can:
- Discover and score companies against your job niche
- Generate deep company research (financials, leadership, culture signals)
- Tailor your CV for a specific role
- Track applications and flag follow-ups
- Surface direct points of contact at the company

Happy to answer questions about the build or how the subagent orchestration works. submitted by /u/Novel-Associate-9799
Karpathy just said "the human is the bottleneck" and "once agents fail, you blame yourself" — I built a system that fixes both problems
In the No Priors podcast posted 3 days ago, Karpathy described a feeling I know too well: he's spending 16 hours a day "expressing intent to agents," running parallel sessions, optimizing agents.md files — and still feeling like he's not keeping up. I've been in that exact loop. But I think the real problem isn't what Karpathy described. The real problem is one layer deeper: you stop understanding what your agents are doing, but everything keeps working — until it doesn't.

Here's what happened to me: I was building an AI coding team with Claude Code. I approved architecture proposals I didn't understand. I pressed Enter on outputs I couldn't evaluate. Tests passed, so I assumed everything was fine. Then I gave the agent a direction that contradicted its own architecture — because I didn't know the architecture. We spent days on rework. I wasn't lazy. I was structurally unable to judge my agents' output. And no amount of "running more agents in parallel" fixes that.

The problem no one is solving

I surveyed the top 20 AI coding projects on star-history in March 2026 — GStack (Garry Tan's project, 16k+ stars), agency-agents, OpenCrew, OpenClaw, etc. Every single one stops at the same layer: they give you a powerful agent team, then assume you know who to call, when to call them, and how to evaluate their output. You're still the dispatcher. You went from manually prompting one agent to manually dispatching six. The cognitive load didn't decrease — it shifted.

I mapped out 6 layers of what I call "decision caching" in AI-assisted development:

| Layer | What gets cached | You no longer need to... |
|---|---|---|
| 0. Raw Prompt | Nothing | — |
| 1. Skill | Single task execution | Prompt step by step |
| 2. Pipeline | Task dependencies | Manually orchestrate skills |
| 3. Agent | Runtime decisions | Choose which path to take |
| 4. Agent Team | Specialization | Decide who does what |
| 5. Secretary | User intent | Know who to call or how |
| + Education | Understanding | Worry about falling behind |

Every project I found stops at Layer 4.
Nobody is building Layer 5.

What I built: Secretary Agent + Education System

Secretary Agent: a routing layer that sits between you and a 6-agent team (Architect, Governor, Researcher, Developer, Tester + the Secretary itself). The key innovation is ABCDL classification — it doesn't classify what you're talking about, it classifies what you're doing:

- A = Thinking/exploring → routes to Architect for analysis
- B = Ready to execute → routes to Developer pipeline
- C = Asking a fact → Secretary answers directly
- D = Continuing previous work → resumes pipeline state
- L = Wants to learn → routes to education system

Why this matters: "I think we should redesign Phase 3" and "Redesign Phase 3" are the same topic but completely different actions. Every existing triage/router system (including OpenAI Swarm) treats them identically. Mine doesn't. The first goes to research, the second goes to execution. When ambiguous, default to A. Overthinking is correctable. Premature execution might not be.

Before dispatching, the Secretary does homework — reads files, checks governance docs, reviews history — then constructs a high-density briefing and shows it to you before sending. Because intent translation is where miscommunication happens most.

The education system: the exam IS the course

When you send a message that touches a knowledge domain you haven't been assessed on, the system asks: "Before routing this to the Architect, I notice you haven't reviewed how the team pipeline works. This isn't a test you can fail — it's 8 minutes of real scenarios that show you how the system actually operates. A) Learn now (~8 min) B) Skip C) 30-second overview." If you choose A, you get 3 scenario-based questions — not definitions, real situations. You answer, and the system reveals the correct answer with reasoning. Testing effect (retrieval practice): cognitive science shows testing itself produces better retention than re-reading. I just engineered it into the workflow.
The anti-gaming design: every "shortcut" leads to learning. Read all answers in advance? You just studied. Skip everything? The system records it and reminds you more frequently. Self-assess as "understood" but got 3 wrong? The diagnostic score is tracked separately, and advisory frequency auto-adjusts. It is impossible to game this system into "learning nothing." That's by design.

Other things worth mentioning

- Agents can say no to you. Tell the Secretary to skip the preview gate and it pushes back: "Preview gating is mandatory. Skipping may cause routing errors. Override?" You can force it — you always can — but the override gets logged and the system learns.
- Cross-model adversarial review. The Architect proposes a solution, then attacks its own proposal using a second AI model (Gemini). Only proposals that survive cross-model scrutiny get through.
- Constitutional governance. 9 Architecture Decision Records protected by governance rules. You can't unilaterally change them
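The ABCDL contract (classify the user's action, and default to A when ambiguous) can be sketched with a toy keyword router. The real Secretary presumably uses an LLM classifier; this is only an illustration of the routing behavior the post describes.

```python
from enum import Enum

class Intent(Enum):
    THINKING = "A"   # explore -> Architect
    EXECUTE = "B"    # do it   -> Developer pipeline
    FACT = "C"       # answer directly
    CONTINUE = "D"   # resume pipeline state
    LEARN = "L"      # education system

# Toy keyword router: ambiguity falls through to A, since overthinking is
# correctable but premature execution may not be. (Illustrative only.)
def classify(message: str) -> Intent:
    m = message.lower().strip()
    if m.startswith(("what", "when", "where", "who", "how many")):
        return Intent.FACT
    if any(w in m for w in ("continue", "resume", "keep going")):
        return Intent.CONTINUE
    if any(w in m for w in ("teach me", "explain", "learn")):
        return Intent.LEARN
    if m.startswith(("redesign", "implement", "fix", "build", "run", "deploy")):
        return Intent.EXECUTE
    return Intent.THINKING  # ambiguous -> Architect

print(classify("I think we should redesign Phase 3").value)  # A
print(classify("Redesign Phase 3").value)                    # B
```

Note how the two "redesign Phase 3" messages from the post land in different branches: the imperative form routes to execution, the hedged form to analysis.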
I built a Claude Code skill that generates broadsheet newspapers to explain any topic
I saw @sambodanis's "newspaper explainer" concept on Twitter today (Claude generating newspapers about what it is doing). It looked so wonderful to me that I turned it into a Claude Code skill.

tl;dr: it generates self-contained HTML broadsheet newspapers with a masthead, animated ticker, lead story with drop cap, sidebar widgets, a three-column stories row, letters to the editor, and a dispatch board. Think of it as a "visual explainer" skill, but newspaper-oriented. Here's a gallery with four different themes: https://newspaper-gallery.vercel.app

It also comes with commands for project recaps, diff reviews, and plan reviews, so you can generate newspapers about your actual codebase too (but it works with anything worth explaining). It's distributed as an npx skill and also supports Anthropic's /plugin system.

npx skills add https://github.com/alberduris/skills --skill newspaper-explainer

Source: https://github.com/alberduris/skills/tree/main/plugins/newspaper-explainer submitted by /u/Fragrant-Street-4639
I asked Opus 4.6 to give an in-depth breakdown of economic health
Just sharing this because I thought it was interesting. It's a topic I wanted to learn more about, but I also wanted to test out Opus and see how well it did. If anyone is educated in economics, I'd love to hear you weigh in on how well Opus did and where it missed. Claude provided a .docx and a summary explanation.

Here's the prompt I used: "Provide a holistic explanation of what a healthy economy is in all aspects. Many people confuse a healthy economy with individual numbers such as unemployment, the DOW, GDP, or inflation. The problem is that this overlooks lots of details such as economic cycles, national debt, underemployment, different types of inflation, the fact that some inflation can be good, and discrepancies between stock prices and an actual healthy economy. Key focus areas for this breakdown include inflation, employment, stock markets, interest rates, real estate, income, cost of living, and anything else that you deem a key pillar of economic health. After this, also include an economic health check for the USA currently. Lastly, cite academic and practical economic theories and philosophies that relate to your explanations."

Output:

UNDERSTANDING A HEALTHY ECONOMY
Beyond the Headlines: A Comprehensive Guide to Economic Health, Measurement, and the Current U.S. Picture
Prepared March 2026. Sources: BEA, BLS, Federal Reserve, FRED, Census Bureau, CBO

Introduction: Why Single Numbers Mislead

When news outlets report that "the economy is strong" or "the economy is struggling," they almost always point to one or two headline numbers: the unemployment rate, the Dow Jones Industrial Average, GDP growth, or the latest inflation reading. While each of these metrics captures something real, none of them individually tells you whether the economy is healthy. Treating any single indicator as the definitive measure of economic health is like judging a person's health by checking only their blood pressure.
A truly healthy economy is one where output is growing sustainably, prices are stable but not stagnant, most people who want work can find meaningful employment, incomes are rising faster than costs, financial markets reflect fundamentals rather than speculation, housing is accessible, and the government's fiscal position is not on an unsustainable trajectory. These conditions must hold simultaneously and, critically, they must hold broadly across the income distribution, not just for the top quintile of earners. This document breaks down each major pillar of economic health, explains the nuances that headlines miss, provides a current health check for the United States as of early 2026, and ties each concept to the academic and practical economic theories that underpin our understanding.

Key Principle: A healthy economy is not defined by any single metric performing well. It requires a balance across multiple dimensions, sustained over time, and distributed broadly across the population.

Pillar 1: Economic Output (GDP)

What GDP Measures and What It Misses

Gross Domestic Product measures the total market value of all final goods and services produced within a country's borders over a given period. Economists typically track real GDP (adjusted for inflation) to strip out price changes and focus on actual output growth. A healthy economy generally shows real GDP growth between roughly 2–3% annually for a mature economy like the United States, which is enough to absorb population growth and productivity gains without overheating. However, GDP has significant blind spots. It does not capture the distribution of income, meaning GDP can rise sharply while most households see stagnant or declining real incomes. It excludes unpaid work such as caregiving and household labor. It also counts activities that may not improve wellbeing: rebuilding after a natural disaster adds to GDP, but the population is not better off.
Environmental degradation and resource depletion are not subtracted. Simon Kuznets, who developed the national income accounts that became GDP, famously warned in 1934 that "the welfare of a nation can scarcely be inferred from a measurement of national income."

The Business Cycle: Expansions, Peaks, Contractions, Troughs

GDP does not grow in a straight line. Economies cycle through expansions (rising output, falling unemployment), peaks (where growth begins to slow), contractions or recessions (declining output, rising unemployment), and troughs (where the economy bottoms out before recovering). The National Bureau of Economic Research (NBER) officially dates U.S. business cycles and defines a recession not simply as two consecutive quarters of negative GDP growth, but as a "significant decline in economic activity that is spread across the economy and lasts more than a few months." This definition matters because it incorporates employment, income, and industrial production alongside GDP. Understanding where you are in the cycle is essential context for interpreting any economic
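The "adjusted for inflation" step behind real GDP is just deflating nominal growth; a minimal worked example with invented numbers:

```python
# Deflating nominal GDP growth to real growth (standard identity; figures invented).
nominal_growth = 0.052       # 5.2% nominal GDP growth
deflator_inflation = 0.027   # 2.7% rise in the GDP price deflator
real_growth = (1 + nominal_growth) / (1 + deflator_inflation) - 1
print(f"real GDP growth: {real_growth:.2%}")  # 2.43%, inside the healthy 2-3% band
```

The common shortcut of subtracting inflation from nominal growth (5.2% - 2.7% = 2.5%) is close but not exact; the ratio form is the correct identity.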
Claude Code vs Codex CLI — orchestration workflows side by side
Been deep in agentic engineering and wanted to see how Claude Code and Codex CLI handle orchestration differently. Claude Code follows a Command → Agent → Skill pattern with mid-turn user interaction, while Codex CLI uses a simpler Agent → Skill pattern since custom commands and ask-user tools aren't available yet. Both repos are open-source reference implementations with flow diagrams, best practices, and working examples using a weather API demo. The architectural differences reveal a lot about where each tool is headed. Claude Code: https://github.com/shanraisshan/claude-code-best-practice Codex CLI: https://github.com/shanraisshan/codex-cli-best-practice submitted by /u/shanraisshan
z.ai debuts faster, cheaper GLM-5 Turbo model for agents and 'claws' — but it's not open-source
Chinese AI startup Z.ai, known for its powerful, open source GLM family of large language models (LLMs), has introduced GLM-5-Turbo, a new, proprietary variant of its open source GLM-5 model aimed at agent-driven workflows, with the company positioning it as a faster model tuned for OpenClaw-style tasks such as tool use, long-chain execution and persistent automation.

It's available now through Z.ai's application programming interface (API) on third-party provider OpenRouter with roughly a 202.8K-token context window, 131.1K max output, and listed pricing of $0.96 per million input tokens and $3.20 per million output tokens. That makes it about $0.04 cheaper per total input and output cost (at 1 million tokens) than its predecessor, according to our calculations.

| Model | Input | Output | Total Cost | Source |
|---|---|---|---|---|
| Grok 4.1 Fast | $0.20 | $0.50 | $0.70 | xAI |
| Gemini 3 Flash | $0.50 | $3.00 | $3.50 | Google |
| Kimi-K2.5 | $0.60 | $3.00 | $3.60 | Moonshot |
| GLM-5-Turbo | $0.96 | $3.20 | $4.16 | OpenRouter |
| GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic |
| Qwen3-Max | $1.20 | $6.00 | $7.20 | Alibaba Cloud |
| Gemini 3 Pro | $2.00 | $12.00 | $14.00 | Google |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI |
| GPT-5.4 | $2.50 | $15.00 | $17.50 | OpenAI |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 | Anthropic |
| GPT-5.4 Pro | $30.00 | $180.00 | $210.00 | OpenAI |

Second, Z.ai is also adding the model to its GLM Coding subscription product, its packaged coding assistant service. That service has three tiers: Lite at $27 per quarter, Pro at $81 per quarter, and Max at $216 per quarter. Z.ai’s March 15 rollout note says Pro subscribers get GLM-5-Turbo in March, while Lite subscribers get the base GLM-5 in March and must wait until April for GLM-5-Turbo. The company is also taking early-access applications for enterprises via a Google Form, which suggests some users may get access ahead of that schedule depending on capacity. Z.ai describes GLM-5-Turbo as designed for “fast inference” and “deeply optimized for real-wor
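The arithmetic behind the article's "$0.04 cheaper" claim is straightforward per-million-token pricing; the helper function here is ours for illustration, not any provider's API.

```python
# Per-call cost at per-million-token prices (figures from the article's pricing table;
# the helper is an illustrative function of ours, not a provider API).
def call_cost(in_tokens, out_tokens, in_price_per_m, out_price_per_m):
    return in_tokens / 1e6 * in_price_per_m + out_tokens / 1e6 * out_price_per_m

glm5_turbo = call_cost(1_000_000, 1_000_000, 0.96, 3.20)  # $4.16
glm5 = call_cost(1_000_000, 1_000_000, 1.00, 3.20)        # $4.20
print(f"delta: ${glm5 - glm5_turbo:.2f}")  # $0.04, matching the article
```

Note the entire saving comes from the input side; the output price is unchanged between GLM-5 and GLM-5-Turbo.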
Built a Claude Code skill: paste a YouTube URL, get a structured summary with clickable timestamps
Built a small Claude Code skill called video-lens. You type /video-lens and it:
- Fetches the YouTube transcript
- Asks Claude to summarize it
- Writes a structured HTML report and opens it in your browser

What the report includes:
- 2–3 sentence TL;DR
- Bulleted key takeaways
- Clickable timestamps that seek the embedded YouTube player
- Dark mode + one-click Markdown export

It works on non-English videos too; the summary stays in the video's original language. There's also an optional Raycast script for macOS so you can trigger it with a hotkey from anywhere. I built this mainly to learn how Claude skills work, but honestly I've been using it every day since. It's especially useful for long, knowledge-dense videos where you want to quickly extract concepts and jump to specific sections rather than scrub through.

Repo: https://github.com/kar2phi/video-lens

Would love feedback on the skill prompt itself (skill/SKILL.md) and the HTML output design. Also curious: is the install flow okay? And is the Raycast integration useful to people, or are there better ways to integrate this into your workflow? submitted by /u/Hot-Lavishness5612
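A transcript segment becomes a clickable timestamp by linking to the video with a `?t=` offset in seconds. A minimal sketch (the helper and the segment shape are assumptions mirroring common transcript APIs, not video-lens's actual code):

```python
# Build the clickable timestamp links a report like this uses to seek the player.
# Illustrative helper; segment shape ({"text", "start" in seconds}) mirrors
# common YouTube transcript APIs.
def timestamp_link(video_id: str, start_seconds: float) -> str:
    total = int(start_seconds)
    m, s = divmod(total, 60)
    h, m = divmod(m, 60)
    label = f"{h}:{m:02d}:{s:02d}" if h else f"{m}:{s:02d}"
    return f'<a href="https://youtu.be/{video_id}?t={total}">{label}</a>'

print(timestamp_link("dQw4w9WgXcQ", 754))  # label 12:34, link ...?t=754
```

For an embedded player, the same offset can instead drive the iframe API's seek call so the click jumps within the page rather than opening a new tab.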
Hands down the best guide to Claude Cowork
Just came across a nice article on how to set up Claude Cowork. Link to the article in the comments. What’s been your experience with Claude Cowork? submitted by /u/Hot-Avocado-6497
Microsoft says ungoverned AI agents could become corporate 'double agents.' Its fix costs $99 a month.
Microsoft today announced the general availability of Agent 365 and Microsoft 365 Enterprise 7, two products designed to bring security and governance to the rapidly growing population of AI agents operating inside the world's largest organizations. Both become available on May 1st, alongside Wave 3 of Microsoft 365 Copilot, which expands the company's agentic AI capabilities and adds model diversity from both OpenAI and Anthropic. Agent 365, priced at $15 per user per month, serves as what Microsoft calls the "control plane for agents" — a centralized system for IT, security, and business teams to observe, govern, and secure AI agents across an enterprise. Microsoft 365 Enterprise 7, dubbed the "Frontier Worker Suite," bundles Agent 365 with Microsoft 365 Copilot and the company's most advanced security stack into a single $99-per-user-per-month license. The timing is deliberate. AI agents have crossed from experimental prototypes into operational infrastructure, but the tools to monitor them have lagged behind. Microsoft is racing to close that gap before adversaries exploit it. "These agents are no longer experimental. We're seeing them deeply embedded in organizations, in the operational structure of these organizations, with people using them," Vasu Jakkal, corporate vice president of Microsoft Security, told VentureBeat in an exclusive interview. "At the same time, as the agents are scaling fast, some of the people and organizations have a visibility gap, and that visibility gap creates business risk." Over 80% of Fortune 500 companies use AI agents, but nearly a third aren't sanctioned The numbers behind the announcement tell a story of breakneck adoption outpacing oversight. According to Microsoft's Cyber Pulse report, published in February, more than 80 percent of Fortune 500 companies are actively using AI agents built with low-code and no-code tools. IDC projects 1.3 billion agents in circulation by 2028. And Microsoft, serving as its own first customer f
View originalX Users Find Their Real Names Are Being Googled in Israel After Using X Verification Software “Au10tix”
X Users Find Their Real Names Are Being Googled in Israel After Using X Verification Software “Au10tix” Alan Macleod On January 30, the Department of Justice released its latest tranche of 3.5 million documents relating to Jeffrey Epstein. Years of emails, texts, and images were suddenly in the public domain. Epstein, a serial rapist, masterminded a global human trafficking and sexual abuse network, and could count princes, professors, and politicians among his closest friends and accomplices. MintPress News has been at the forefront of covering the Epstein saga, revealing his extremely close links to American and Israeli intelligence groups – a discovery that perhaps sheds light on why it took so long for the world’s most notorious pedophile to face accountability for his crimes. Many of the DOJ files have been heavily redacted in order to protect Epstein’s powerful clients. Still, they have exposed a massive elite nexus revolving around the New York billionaire, implicating presidents, diplomats, and plutocrats in his crimes, and imply that Epstein was significantly more powerful than first thought, shaping modern politics in ways never previously understood. With shocking new details emerging on a near-hourly basis, here are ten Epstein- related stories that have flown relatively under the radar. The Israeli Government Installed Surveillance Cameras at Epstein’s New York Apartment The Israeli government installed and maintained a hi-tech surveillance system at Epstein’s Manhattan apartment complex, including a network of alarms and cameras, emails show. Starting in 2016, the director of protective service at the Israeli mission to the United Nations controlled guests’ access to the Manhattan residence, and even performed background checks on prospective cleaners and other Epstein employees. Former Israeli prime minister Ehud Barak admitted visiting the apartment up to 100 times, and stayed there for long periods of time. 
While Barak’s security may have been a concern, Epstein is known to have housed underage girls at the apartment, and many of his worst sexual crimes and most sordid parties took place there, raising questions as to what sort of images and data the Israeli government had access to.

Epstein Plotted War With Iran

Ehud Barak became one of Epstein’s closest associates, staying for extended periods at the billionaire’s residences. The pair would email, text, call, and meet constantly. A search for “Ehud Barak” returns more than 3,500 results in the latest file dump alone. The pair talked politics, and shared a vision of the United States attacking Iran. In 2013, with negotiations between the International Atomic Energy Agency and Iran stalling, Epstein emailed Barak stating, in typically poor spelling and grammar: “hopefully somone suggests getting authorization now for Iran. the congress woudl do it.” Epstein would get his wish in 2025, when his close associate Donald Trump began bombing the country.

Noam Chomsky Considered Epstein His “Best Friend”

Epstein arranged a meeting between Barak and renowned leftist academic (and vehement critic of the U.S. and Israel) Noam Chomsky. An unlikely friendship between the notorious pedophile and the star professor blossomed, with the pair regularly meeting at each other’s houses for dinner. Chomsky flew on Epstein’s “Lolita Express” jet to attend a dinner with Woody Allen in New York. He also expressed his desire to visit Little St. James Island, Epstein’s notorious Caribbean hideaway and the center of his trafficking operation. Chomsky considered Epstein his “best friend,” according to an email sent by his wife, Valeria.
The usually curt and matter-of-fact academic signed off his emails to Epstein with unexpectedly flowery language, such as “Like real friendship, deep and sincere and everlasting from both of us, Noam and Valeria.” Chomsky strongly supported Epstein until Epstein’s dying day in a Manhattan prison cell, taking it upon himself to act as his unofficial crisis manager, describing his accusers as “publicity seekers or cranks of all sorts,” and denouncing the media as a “culture of gossip-mongers” destroying his stellar character. “Ive watched the horrible way you are being treated in the press and public,” he wrote, advising Epstein on tactics to fight the supposed smears against him. For a full rundown of the Chomsky-Epstein relationship, see the MintPress News investigation: “The Chomsky-Epstein Files: Unravelling a Web of Connections Between a Star Leftist Academic and a Notorious Pedophile.”

Steve Bannon Developed a Plan to Help Epstein “Crush the Pedo Narrative”

A second public figure running defense for Epstein was Steve Bannon. In public, the far-right strategist claimed he was working on a documentary exposing Epstein. In private messages, however, Bannon, like Chomsky, was advising Epstein on how best to repair his image. Just weeks before Epstein’s arrest and subsequent death, Bannon was messaging him, devising a complex media strategy
Sen. Sheldon Whitehouse (D-RI) lays out the connections between Trump, Russia, and Epstein (transcript included)
**NOTE:** This transcript now appears in [the Senate section of the official *Congressional Record* of March 5, 2026, pages 18-23,](https://www.congress.gov/119/crec/2026/03/05/172/42/CREC-2026-03-05-senate.pdf) with Sen. Whitehouse's own list of sources appended.

-----

The following is the YouTube transcript, which I cleaned up, checked for errors, lightly edited for readability, verified the spelling of proper names via Wikipedia, and added links to any quotes that I checked myself. (EDITED to add links to individuals mentioned, correct the placement of quotes, and insert links to original articles where I could find them online.) I found myself doing it anyway just for me, to keep track of who's who, and then I realized I might as well do it for you as well.

This is an unparalleled speech: while the substance of it might be available elsewhere and I've just missed it, Sen. Whitehouse has answered a lot of questions in my mind about not just the links between Trump, Russia, and Epstein -- and William Barr as one of many links -- but also about the recording equipment and blackmail angle that is present in so many survivor accounts and so noticeably absent everywhere else. It's truly worth listening to, but if you can't sit still that long, here's the transcript.

-----

Thank you, Madam President. It was the spring of 2019. Public and media interest in special counsel [Robert Mueller's report into Russia's election interference operation](https://en.wikipedia.org/wiki/Mueller_special_counsel_investigation) had reached a fever pitch. There had been a steady drip, drip, drip of reporting on the Trump team's cozy and peculiar relationship with Russia since his surprise election victory in 2016. Ahead of the Mueller report's release, Trump's Attorney General, Bill Barr, [issued a letter to Congress purporting to summarize the report's findings.](https://en.wikipedia.org/wiki/Barr_letter) The letter declared that Russia and the Trump campaign did not collude to steal the election.
The press, ravenous for any news of the long-anticipated Mueller report's conclusion, largely accepted [Attorney General Barr's](https://en.wikipedia.org/wiki/William_Barr) narrow, carefully worded conclusion and, not yet having access to the full report, blasted the attorney general's summary around the world. Trump himself declared, all caps, NO COLLUSION. He said he had been cleared of the Russia "hoax," a term he reserves only to describe things that are true, like climate change.

Frustrated, Mueller wrote to Barr that the attorney general's letter did not fully capture the context, nature, and substance of the investigation. But by the time [the dense, voluminous Mueller report](https://en.wikipedia.org/wiki/Mueller_report) was issued the month after Barr's letter, its message had been obscured. The Mueller report actually concluded that the Trump campaign knew of and welcomed Russian interference and expected to benefit from it. That conclusion was later echoed and reinforced by [an investigation led by then-chairman Marco Rubio's Senate Intelligence Committee,](https://en.wikipedia.org/wiki/Mueller_report#Senate_Intelligence_Committee) a bipartisan report.

But Barr's scheme had largely worked. Many in the media and in the Democratic Party seemed to internalize that the Russia speculation had perhaps gotten out of hand, and that perhaps we had been wrong to believe there was a troubling connection between Trump and Russia after all. But were we? Let's take a look at a sampling of what Trump has done for Russia just lately, and usually at the expense of American interests. There are many, but here's a top 10.

**One,** after Trump and Vice President Vance theatrically chastised the heroic Ukrainian President Zelenskyy in front of TV cameras in the Oval Office last year, Trump paused our weapons shipments to Ukraine.
**Two,** in July, during the worst Russian bombing campaign of the war until that point, Trump paused an already funded weapons shipment for Ukraine, including the Patriot interceptors that protect civilians from Putin's savage attacks.

**Three,** that same month, Trump's Treasury Department stopped imposing new sanctions and closing sanctions loopholes, effectively allowing dummy corporations to send funds, chips, and military equipment to Russia.

**Four,** leaked phone calls show that White House envoy [Steve Witkoff](https://en.wikipedia.org/wiki/Steve_Witkoff) and Putin envoy [Kirill Dmitriev](https://en.wikipedia.org/wiki/Kirill_Dmitriev) have worked together closely behind the scenes on a peace deal favorable to Russia.

**Five,** last summer, Trump rolled out the presidential red carpet for the Russian dictator on American soil, with a summit in Alaska that yielded, unsurprisingly, no gains toward ending the war in Ukraine.

**Six,** Trump's vice president traveled to the Munich Security Conference last year to parrot Russia's anti-western talking points pushed by right-wing groups that Puti
DeepL uses a subscription + tiered pricing model. Visit their website for current pricing details.
Key features include: Text translation, Document translation, Glossary, Image translation, Formality, Dictionary, Unlimited text translation and improvements, Increased file translation capacity.
Based on user reviews and social mentions, the most commonly recurring topics are: AI agents, OpenAI, Anthropic, and Claude.
Based on the 30 social mentions analyzed, sentiment is 0% positive, 100% neutral, and 0% negative.