
Vijil
August 4, 2025
In our Team Spotlight series, we talk with members of the Vijil team about their backgrounds, what drives them, and how they’re shaping the future of trustworthy AI.
In this edition, we spoke with Leif Hancox-Li, Ph.D., Senior Applied Scientist at Vijil. With a unique background that bridges philosophy, physics, and machine learning, Leif is helping lead the development of contextual and rigorous approaches to AI ethics and trust evaluation.
What’s your background?
My background’s a bit unusual. I have a Ph.D. in philosophy, but before that, I studied physics and did a lot of programming, mainly analyzing data in a physics lab.
That mix of philosophy and technical experience led me into AI ethics. I understood the math and programming side of things, and I was drawn to the deeper questions around values, judgment, and impact. Transitioning into data science and then working in trustworthy AI felt like a natural fit.
What inspired you to join Vijil?
Honestly, it was the product and the people. Vijil is building tools to evaluate how toxic or biased an AI system is, which aligns directly with my interests. It’s rare to find a company where the team genuinely cares about these issues, not just as a feature checkbox, but as a core part of what they’re building.
One thing I was especially excited about was contributing to Vijil’s ethics evaluation framework. It lets organizations assess whether their AI agents align with their own ethical standards, not just generalized benchmarks. That specificity and contextuality were missing from a lot of other tools out there. I’d tried many of them, and I found them inconsistent and not very useful in practice.
The truth is, ethics can’t be one-size-fits-all. Public benchmarks often make hidden ethical assumptions that might not apply to a given use case. To evaluate whether a chatbot is ethical, for example, you need to understand what that chatbot is for, who it’s interacting with, and what the expectations are. At Vijil, we actually take all that into account.
What excites you most about working at Vijil?
I have a lot of autonomy in defining what trust in AI actually means, which is incredibly exciting.
A lot of benchmarking in AI is, I’ll just say it, meaningless. It’s often based on rigid assumptions or narrow definitions of ethics that reflect a specific ideology. I think there’s real value in introducing different perspectives into the field, and Vijil gives me the space to do that.
We’re working toward more grounded, contextual ways of measuring trustworthiness. It’s not just about scores; it’s about meaningful evaluation.
What’s the biggest opportunity for Vijil? What challenges come with it?
The opportunity is huge because most current evaluation methods are, frankly, not quite there. But that’s not a failure; it’s a reflection of how difficult the problem is.
When you’re trying to measure something like “harm” or “toxicity,” you’re dealing with social constructs. Context matters a lot. And yet, many in the industry treat benchmark scores as the final word. They say, “This model scores highest, so it must be best,” without really questioning what those scores mean.
The challenge is both technical and perceptual. We need to create better tools and educate the industry that trustworthiness is not just a checkbox; it’s a nuanced, ongoing process.
What’s one lesson from your experience you’re applying directly at Vijil?
That trust can’t be abstracted away. It has to be designed for a specific context.
My philosophy background taught me that ethics is deeply situational. When I look at AI systems, I try to consider not just the model, but how it’s integrated into the full system. What are the inputs and outputs? What actions can it take? What oversight is there?
That habit of thinking systemically and contextually is something I bring to everything I do at Vijil.
What’s a widely held belief about AI that you question?
People often talk about “AI” like it’s one thing. But it isn’t; it’s many different things, with very different risks.
A large language model used in a search tool is not the same as one controlling physical systems. A vision model used for medical imaging has different risks than a chatbot used for customer service.
When people say “AI is good” or “AI is dangerous” without specifying which system, how it’s deployed, and what it’s connected to, the conversation becomes meaningless. We need to move beyond generalizations and start talking about AI in context.
What’s something about you that might surprise others?
Even though I work in AI, I’m trying to disconnect from technology more in my personal life.
I noticed I was checking my phone too much, so I bought a non-smart watch, just something to tell the time without pulling out my phone and getting sucked into everything else.
I also uninstalled social media from my phone. I can still check it from my computer, but I want that interaction to be more intentional. Ironically, working in AI has made me more cautious about how I use it in my daily life.
👉 Want to learn more about Vijil and how we’re delivering mission-critical AI agents? Check out our website to meet the team, and set up a time to chat with us!