Enkrypt AI stands apart by merging threat detection, privacy, and compliance into a comprehensive toolkit that ensures business adoption of LLMs and generative AI is safe, reliable and compliant.
Generative AI and large language models present an opportunity for enterprises to gain new efficiencies and improve functionality; however, the safety and security of such technology remains an obstacle. Enkrypt AI is today announcing a $2.35M funding round to solve this problem for enterprises, ensuring their use of generative AI and LLMs is safe, secure and compliant. The seed round was led by BoldCap with participation from Berkeley SkyDeck, Kubera VC, Arka VC, Veredas Partners, Builders Fund and angel investors in the AI, healthcare and enterprise space.
Enkrypt AI was founded in 2022 by two Yale PhDs and AI practitioners, Sahil Agarwal (CEO) and Prashanth Harshangi (CTO). With Enkrypt AI, enterprises gain a control layer between LLMs and end users that provides security and safety functionality. Enkrypt AI Sentry has reduced vulnerabilities across a wide range of LLMs, cutting jailbreaks from 6% to 0.6% in the case of Llama 2 7B. The Enkrypt AI team has previously developed and deployed AI models across diverse sectors, including the US Department of Defense and businesses in self-driving cars, music, insurance and fintech.
Enkrypt AI’s Sentry is the only platform that combines visibility and security for generative AI applications in the enterprise, allowing enterprises to secure and accelerate their generative AI adoption with confidence. A leading Fortune 500 data infrastructure company is using Sentry to gain complete access control and visibility over all its LLM projects, helping it detect and mitigate LLM attacks such as jailbreaks and hallucinations, and prevent sensitive data leaks. This is ultimately leading to faster adoption of LLMs for even more use cases across departments.
Sahil Agarwal, Co-founder and CEO of Enkrypt AI, commented: “Businesses are really excited about using LLMs, but they’re also worried about how trustworthy they are and the uncertain regulatory landscape. Based on our conversations with CIOs, CISOs and CTOs, we are convinced that widespread LLM adoption must be built on a foundation of security, privacy, and compliance. With Sentry, we are merging visibility and security to align with and support adherence to regulatory frameworks like the White House Executive Order on AI, the EU AI Act, and other AI-centric regulations, laying the groundwork for safe and compliant AI integration.”
Prashanth Harshangi, Co-founder and CTO at Enkrypt AI, commented: “As the benefits of AI become ever more tangible, so do the risks. Our platform does more than just detect vulnerabilities; it equips developers with a comprehensive toolkit to fortify their AI solutions against both current and future threats. We’re championing a paradigm where trust and innovation coalesce, enabling the deployment of AI technologies with the confidence that they are as secure and reliable as they are revolutionary.”
Enkrypt AI helps enterprises accelerate their generative AI adoption by up to 10x, deploying applications into production within weeks rather than the two years enterprises typically forecast. Their comprehensive approach addresses the key concerns causing hesitation among enterprise decision-makers:
- Delivers unmatched visibility and oversight of LLM usage and performance across business functions.
- Ensures data privacy and security by protecting sensitive information and guarding against threats.
- Manages compliance with evolving standards through automated monitoring and strict access controls.
The safety of AI has been a key concern for policymakers and experts. Earlier this month, the US National Institute of Standards and Technology (NIST) established an AI safety consortium. In an era where generative AI is becoming a transformative force across industries, safeguarding these systems goes beyond best practice – it’s a necessity.
Sahil Agarwal added: “Our mission at Enkrypt AI is to provide the tools that allow enterprises to not only harness the incredible potential of generative AI but to do so with the utmost confidence in the security and compliance of their applications. With the support of our investors and the advanced capabilities of our platform, we are setting a new standard in AI safety – protecting users and organizations against emerging threats while enabling the wider adoption of AI innovations in a responsible manner.”
Sathya Nellore Sampat, General Partner at BoldCap, commented: “We are super excited to be backing practitioners like Sahil and Prashanth who are at the intersection of security and Gen AI. Enterprise security is non-negotiable. With the explosive growth of Gen AI and LLM usage within companies, the attack surface has dramatically increased. Enkrypt is the command center to control, monitor and gain visibility across Gen AI initiatives.”
About Enkrypt AI
Enkrypt AI, co-founded by Yale PhDs Sahil Agarwal and Prashanth Harshangi, is pioneering the safe adoption of Generative AI within enterprises. With an innovative all-in-one platform, Enkrypt AI is revolutionizing how Large Language Models (LLMs) are integrated and managed, addressing critical needs for reliability, security, data privacy, and compliance in a unified solution.
Used by mid-sized to large enterprises in industries including finance and life sciences, Enkrypt AI’s Sentry offers a proactive approach to AI security, fostering trust and efficiency in AI implementations from chatbots to automated reporting. Enkrypt AI sits between users and AI models, offering a variety of safety and security layers.
Enkrypt AI stands apart by merging threat detection, privacy, and compliance into a comprehensive toolkit, poised to become the definitive enterprise generative AI platform for an evolving regulatory landscape. For more information, please visit https://www.enkryptai.com/ or follow via LinkedIn, X, Instagram or YouTube.