Vera Raises $2.7M to Ensure AI Models are Secure, Clear of Bias, and Following Brand Guidelines

Vera Raises $2.7M to Ensure AI Models are Secure, Clear of Bias, and Following Brand Guidelines

Source Node: 2312351

The rate of AI development is ushering in a new Moore’s law with the development doubling every few years according to some experts.  According to Stanford, the rate of doubling is as frequent as every three months.  Irrespective of the actual rate of doubling, the compound growth is exponential and impressive. However, this growth comes with caveats with AI still suffering from a critical problem – differentiating between reality and hallucination when encountering data. In addition, AI models can contain embedded bias, privacy risks, unique cyber threats, and can also be used as a channel to create more risk when manipulated.  Vera is a platform that enforces and automates privacy, security, and fairness policies for AI within an enterprise.  The platform sits between model inputs and outputs to ensure that any content that should be safeguarded is blocked or redacted according to your policies irrespective of where the model originates from.  For conversational AI models that are public-facing, Vera will ensure that outputs or answers are consistent with brand guidelines across images, text, code, or videos.  In addition, the platform also works to protect models and ensure security so that models can’t be tricked by nefarious attacks.

AlleyWatch caught up with Vera CEO Liz O’Sullivan to learn more about the business, the company’s strategic plans, recent round of funding, and much, much more…

Who were your investors and how much did you raise?
The pre-seed round was led by Differential Venture Partners, with participation from Everywhere VC, Essence VC, Betaworks, Greycroft, SaaS Ventures, and ATP Ventures.

Tell us about the product or service that Vera offers.
Vera identifies risks in model inputs, to block, redact, or transform requests that may contain PII, security credentials, intellectual property, prompt hacking, and submissions that violate company policies on what AI is allowed to answer. Vera also enforces policies on what models are allowed to say in return, giving companies control over the behavior of models in production. The platform reduces risks of model bias, criminal behavior, and aligns outputs with brand safety guidelines.

What inspired the start of Vera?
When Justin (cofounder) and I first began our work in AI, we were astounded by its potential to transform the world. Back then, AI seemed like magic to me, doing things I didn’t think technology would ever be able to do. Of course, AI is not magic, it’s math. I learned this lesson when my team was responsible for building and labeling datasets that would one day become models, while Justin learned it as a Marine officer, doing math on a whiteboard before GPUs were fit for the task. For both of us, it was a front-row seat for all the crazy, unpredictable things that could go wrong in AI without good trust and safety practices in place.
A few years ago, companies around the world loudly proclaimed their “AI Principles”, assuring the public that they would only release models that did more good than harm. Today, it’s undeniable that AI is useful, and everyone can tell that its unique challenges require guardrails. In a post-GPT world, it’s clear that the time has come to move beyond AI principles into practice, where new, surprising properties arise each and every day.
Our team has witnessed, firsthand, the many ways AI can help society, and just as many ways it can be abused. We believe in the power of AI to benefit the world, but we would be naive, after all we’ve seen, to think that this will happen without great care and prudence, which is why Vera exists today.

How is Vera different?
Vera is different because our diverse team has experience spanning academia, industry, and government, making us an ideal partner for reducing AI risk in any setting. Our platform is cloud and model-agnostic, so teams can avoid vendor lock-in and use the best models of today and tomorrow without having to choose a one-size-fits-all model approach.

What market does Vera target and how big is it?
We operate in the AI risk market, which is estimated to have been $1.7 billion in 2022, and is projected to reach $7.4 billion by 2032

What’s your business model?
SaaS.

How are you preparing for a potential economic slowdown?
Perhaps the only segment that seems to be robust against economic downturn is the drive to deploy AI, and we’re poised to offer the tools that allow this technology to flourish in a safe and effective way.

What was the funding process like?
Everyone can tell that the VC market is different from what it used to be when I first started my 12-year journey in tech. That said, we’ve been extremely lucky that our space and category have garnered a metric ton of interest and ended up with the round oversubscribed!

What are the biggest challenges that you faced while raising capital?
My team is used to being underestimated in various walks of life due to our demographics. This reality has just built in each of us unstoppable drive and commitment to achieving what others might think of as impossible. Fundraising is a bit like speed dating; if it doesn’t feel right at first, that’s perfectly ok! There are dozens more opportunities right behind those where it’s clearly not a good fit.

What factors about your business led your investors to write the check?
The team at Differential loves that we are model and cloud agnostic, with the ability to cater to any type of Generative AI whether text, image, or code. They see that the expertise in our backgrounds makes us well-suited to capture this market, and have been fantastic to work with since day 1!

What are the milestones you plan to achieve in the next six months?
Our goals this year include hardening the platform and deploying the technology to our growing waitlist of customers eager to get started with their Generative AI strategy.

What advice can you offer companies in New York that do not have a fresh injection of capital in the bank?
My favorite Churchill quote has been bouncing around my head for the last few months: “Never give in, never give in… except to convictions of honor and good sense.”

Where do you see the company going now over the near term?
In the near term we’re focused on hiring and building the team and culture so we can be ready for this next phase of hypergrowth.

What’s your favorite fall destination in and around the city?
It’s cliché, but I still love the High Line. The colors, the art, the juxtaposition of industrial and nature all encapsulate what I love most about this city; its unexpected, breath-taking beauty around every corner.


You are seconds away from signing up for the hottest list in Tech!

Sign up today


Time Stamp:

More from AlleyWatch