Lasso’s AI Security Platform gives enterprises visibility, control, and protection across AI models, agents, and apps. Reduce GenAI risk in real time.
Protection for every AI application you build and deploy, wherever it runs.

Why AI security is hard:
- Decentralized ownership has led to Shadow AI across the enterprise: security teams have zero visibility into the AI tools and agents employees are using and creating.
- Traditional security relies on predictable rules, but AI is non-deterministic. Preventing risk requires analyzing the intent behind an agent's actions rather than relying on fixed patterns.
- Foundational models are inherently vulnerable, introducing risk into the software supply chain. Frequent provider updates can instantly change the behavior of any agent or application built on top of them.
- AI systems are a critical attack surface. Adversaries are exploiting them by manipulating model behavior and bypassing agent guardrails, taking advantage of security gaps that cannot detect these anomalies or threats in real time.

Comprehensive security across all your AI users, models, agents, and applications, from build time to runtime. Lasso is purpose-built for enterprises, with speed, scale, precision, and cost efficiency at the core of its AI Security Platform:
- More cost-effective than cloud-native guardrails
- Low per-classification cost, using the fastest LLM as a judge
- Patents pending on proprietary AI innovation
- Accuracy across content, context, and intent detections
- Coverage of attack types and techniques used by its offensive AI agents

"Lasso's investigative tools have been incredibly valuable. But they also help to prevent risks proactively by educating our employees about responsible AI usage. This has been key to enabling innovation while maintaining compliance and security."

"Lasso's full security suite has been crucial in fortifying our GenAI applications. Their approach ensures our organization, customers, data, and employees stay protected from various attacks while allowing me full control over my environment."
"As a consultant, I've worked with countless security tools, but Lasso Security stands out with its comprehensive suite and LLM-first approach. It offers robust observation and protection for sensitive data and enables fast remediation and real-time response. In the fast-evolving AI landscape, Lasso delivers true value."

"As a CEO focused on driving innovation and growth, ensuring the security of AI initiatives for our clients is paramount. We're proud to have Lasso as our trusted security partner in adopting GenAI, enabling us to focus on what we do best: innovating and growing."

"Lasso Security's comprehensive security suite has been a critical part of securing our GenAI infrastructure. The level of control and visibility it provides ensures that both our internal data and client information are shielded from emerging threats, and gives us the confidence to embrace GenAI safely."
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 64
Funding stage: Seed
Total funding: $6.2M
Claude Code with --dangerously-skip-permissions is a real attack surface. Lasso published research + an open-source defender worth knowing about.
If you use Claude Code with --dangerously-skip-permissions, this is worth 10 minutes of your time.

Lasso Security published research on indirect prompt injection in Claude Code. The short version: when Claude reads files, fetches pages, or gets output from MCP servers, it can't reliably tell the difference between your instructions and malicious instructions embedded in that content. So if you clone a repo with a poisoned README, or Claude fetches a page that has hidden instructions in it, it might just... follow them. With full permissions.

The attack vectors they document are pretty unsettling:
- Hidden instructions in the README or code comments of a cloned repo
- Malicious content in web pages Claude fetches for research
- Edited pages coming through MCP connectors (Notion, GitHub, Slack, etc.)
- Encoded payloads in Base64, homoglyphs, zero-width characters, you name it

The fundamental problem is simple: Claude processes untrusted content with trusted privileges. The --dangerously-skip-permissions flag removes the human checkpoint that would normally catch something suspicious.

To their credit, Lasso also released an open-source fix: a PostToolUse hook that scans tool outputs against 50+ detection patterns before Claude processes them. It warns rather than blocks outright, which I think is the right call: false positives happen, and you want Claude to see the warning in context, not just hit a wall. It takes about 5 minutes to set up and works with both Python and TypeScript.

Article: https://lasso.security/blog/the-hidden-backdoor-in-claude-coding-assistant
GitHub: https://github.com/lasso-security/claude-hooks

Curious whether people actually run Claude Code with that flag regularly. I can see why you would; the speed difference is real. But the attack surface is bigger than I think most people realize.

submitted by /u/amitraz
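To make the hook idea concrete, here is a minimal sketch in Python. The three regexes are illustrative stand-ins, not Lasso's actual 50+ pattern rule set, and the stdin/exit-code plumbing follows Claude Code's published hooks convention (hook input arrives as JSON on stdin; exiting with code 2 feeds stderr back to the model, which for PostToolUse acts as an in-context warning rather than a block, since the tool has already run):

```python
import json
import re
import sys

# Illustrative detection patterns only -- the real claude-hooks project
# ships 50+; these three just show the shape of the idea.
PATTERNS = [
    ("instruction-override",
     re.compile(r"ignore (all )?(previous|prior) instructions", re.I)),
    ("zero-width-chars",
     re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")),
    ("base64-blob",
     re.compile(r"[A-Za-z0-9+/]{60,}={0,2}")),
]


def scan_text(text: str) -> list[str]:
    """Return the names of every pattern that matches the tool output."""
    return [name for name, rx in PATTERNS if rx.search(text)]


def main() -> None:
    try:
        # Claude Code passes hook input as JSON on stdin; the tool's
        # output lives under "tool_response".
        event = json.load(sys.stdin)
    except (json.JSONDecodeError, ValueError):
        return  # no hook input; nothing to scan
    # Serialize the response so nested structures (e.g. MCP results)
    # are scanned too.
    findings = scan_text(json.dumps(event.get("tool_response", "")))
    if findings:
        # Exit code 2 surfaces stderr to the model. For PostToolUse the
        # tool already ran, so this warns in context instead of blocking.
        print(f"WARNING: possible prompt injection ({', '.join(findings)})",
              file=sys.stderr)
        sys.exit(2)


if __name__ == "__main__":
    main()
```

A script like this gets registered as a PostToolUse command hook in Claude Code's settings so it runs after every tool call; the production version with the full pattern set is at the GitHub link above.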
Lasso Security uses a tiered pricing model. Visit their website for current pricing details.
Key features include: Everyone is an agent builder, AI is unpredictable, The supply chain is fractured, AI threats are on the rise, Speed, Innovation, Accuracy, Security.
Lasso Security is commonly used for: All Your AI Security Needs in One View.