CalypsoAI
Define and deploy agile data security, threat management, and governance for AI models, apps, and agents.
Safeguard AI systems from evolving threats like prompt injection and jailbreaks.
- Choose from preset guardrails or create bespoke policies for specific use cases.
- Detect and prevent data leakage, compliance failures, and policy violations at runtime.
- Ensure regulatory compliance, block harmful outputs, and enforce restrictions on model and agent privileges.
- Achieve continuous visibility and traceability across all AI interactions.

AI expands the attack surface in every direction. To maintain your security posture, teams need solutions that balance efficient workflow automation with strategic prioritization and continuous protection against evolving threats. F5 AI Guardrails meets these needs with scalable data governance, augmented threat management, and risk auditing for present and future challenges.
- Apply tailored risk-evaluation frameworks to public foundation models and in-house models alike.
- Inspect AI interactions across models and apps, with real-time protection against DLP and policy violations.
- Ensure enterprise-wide policy alignment with automated auditing templates for GDPR, HIPAA, the EU AI Act, and more.
- Rapidly translate insights from F5 AI Red Team and agentic threat intelligence into an active defense strategy.
- Route requests across models dynamically to avoid failover states and maintain performance without compromising security.
- Avoid detrimental outputs with content-moderation filters for toxic, biased, or inaccurate content.
- Safeguard frontier models with preset configurations for the most popular enterprise and open-source AI.

See how F5 AI Guardrails performed against 17,733 adversarial test cases, independently validated by SecureIQLab.
Vijil
Cut time-to-trust in AI agents from 6 months to 6 weeks. Vijil makes agents reliable, secure & safe for enterprises with testing & protection.
Mission: to help enterprises use AI agents that are verifiably reliable, secure, and safe by providing trust as infrastructure for agent development, operations, and continuous improvement.

Team:
- Previously GM/Director of Engineering at Amazon SageMaker. 30 years across AI/ML, data, cloud, OS, and security; 11 AWS AI services, 30 products, 10 patents, 5 papers.
- AWS AI senior leader; 20 years in ML systems and graphics; led the PyTorch, TensorFlow, and AWS SageMaker Training teams.
- Previously COO at Astronomer; helped scale Lacework from $1M to $100M ARR; 20 years of GTM strategy and partnerships in cybersecurity; consulting and investment banking; Harvard.
- Assistant Professor of Statistical Sciences at the University of Toronto, Faculty Member at the Vector Institute for Artificial Intelligence, and Faculty Affiliate at the Schwartz Reisman Institute for Technology and Society.
- Responsible AI leader; 10+ years in data science; co-author of Trustworthy ML (O'Reilly); 40 papers, 20 patents; key contributor to OSS (Garak, AVID, AI Village).
- Previously at Amazon Music, Oracle, and Viiv Labs; co-founder and CTO of Adya (acquired by Qualys). Passionate about designing and building large-scale ML systems with a focus on NLP/LLMs. Enjoys reading, hiking, cooking, doing nothing.
- Previously at Riva Health, Viiv Labs, Solvvy, and Polycom. Over 20 years of software engineering experience. Most recently led threat modeling and cybersecurity analysis of a medical device to prepare for FDA approval. University of California, Berkeley.
- Previously at Capital One, evaluating LLMs for company-wide use. Working in responsible AI since 2019, including building explainability solutions, establishing responsible AI processes, and publishing interdisciplinary research at venues like FAccT. Tries to spend at least one week a year walking in the mountains.
- UX/UI designer and front-end developer, previously at bitlogic.io. Based in Córdoba, Argentina. Instituto Superior Politécnico de Córdoba.
- Previously at Amazon, Oracle, and Accenture. Working on AI/ML security engineering since 2019. Most recently led red-teaming for Amazon AI models. Indiana University.
- Cloud infrastructure engineer. Most recently at MIST (acquired by Juniper), built the conversational interface to the Marvis Virtual Network Assistant, designed to diagnose and resolve networking issues. University of Illinois at Urbana-Champaign.
- Previously at Microsoft. Research interests in trustworthy AI, ML for human safety, and autonomous vehicles. University of Michigan.
- Senior Applied Scientist. Previously at Lorica Cybersecurity, designed and deployed privacy-preserving machine learning products; expertise in fully homomorphic encryption and trusted execution environments for LLMs. University of Toronto.
- Works at the intersection of algorithmic fairness auditing and collective action. PhD UIUC, MS Harvard, BS Caltech. Previously at Goldman Sachs, with internships at Instacart and Snap. Previously postdoc in game theory and r