Stop “vibe testing” your LLMs. It's time for real evals.

August 27, 2025
Sara Wiltberger, Senior Product Manager, Google Labs


If you’re building with LLMs, you know the drill. You tweak a prompt, run it a few times, and... the output feels better. But is it actually better? You're not sure. So you keep tweaking, caught in a loop of “vibe testing” that feels more like art than engineering.

This uncertainty exists for a simple reason: unlike traditional software, AI models are non-deterministic. They don’t always give the same output for the same input, which can make your usual unit tests insufficient for knowing if a change actually made things better. On top of that, you may need to wrangle datasets, manage API calls, parse outputs, and build a whole evaluation pipeline before you can even start testing.
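
To make that concrete, here is a toy sketch of why an exact-match unit test breaks down for non-deterministic output. The call_model function is a hypothetical stand-in for any LLM call, not part of any real API:

```python
# Hypothetical illustration: call_model is a stand-in for any LLM call.
def call_model(prompt: str) -> str:
    """Returns the model's text output; wording varies from run to run."""
    raise NotImplementedError

def test_capital_of_france():
    # Run 1 might return "Paris", run 2 "The capital of France is Paris."
    # Both are correct, but only the first passes an exact-match assertion,
    # so the test flakes even when the model hasn't gotten worse.
    assert call_model("What is the capital of France?") == "Paris"
```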

To move past the vibes, we built Stax, an experimental developer tool designed to take the headache out of LLM evaluation. We’ve leveraged the evaluation expertise from Google DeepMind and the experimental innovation from Google Labs to streamline the LLM evaluation lifecycle.


Why evals?

To know if your AI application really works for a specific use case, you need your own AI evals. General benchmarks are generic tests that measure a model across a range of tasks, but they don’t tell you how your AI stack performs on your own data and criteria.

Done right, evals can be a key differentiator, letting you codify your unique goal into your own reusable benchmark. Instead of spending hours “vibe testing” every time you try a new model or tweak a prompt, evals give you clearer metrics to help understand what’s actually better.


The evolution of evaluation methodologies

We need real, repeatable evaluations. In the world of LLMs, that generally means two things: humans and… more LLMs.

1. Humans: Getting humans to label outputs is the gold standard, but it can be slow and expensive.

2. Autoraters (aka LLM-as-a-judge): This is where things get interesting. We can use a powerful LLM (such as a Gemini model) with a specific set of instructions to “grade” the output of another model. It's a scalable, consistent way to help check for things like relevance, tone, or factual accuracy.
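
As an illustration of the pattern (not Stax's internals), here is a minimal autorater sketch. It assumes the google-genai Python SDK, an API key in the environment, and an illustrative model name and rubric:

```python
# Minimal LLM-as-a-judge sketch; model name and rubric are illustrative.
from google import genai

client = genai.Client()  # reads the API key from the environment

JUDGE_PROMPT = """You are grading a model's answer.
Question: {question}
Answer: {answer}
Rate the answer's factual accuracy from 1 (wrong) to 5 (fully correct).
Respond with only the number."""

def grade(question: str, answer: str) -> int:
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=JUDGE_PROMPT.format(question=question, answer=answer),
    )
    return int(response.text.strip())
```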


How Stax works

Stax helps you rigorously test your “AI stack” and make data-driven decisions, giving you more confidence when selecting and shipping AI systems for your use case.


Bring your data (or build it)

Got a CSV of test cases? Awesome, just upload it. Need to build a dataset from scratch? Our interface can make it easier to create a set of prompts and ground-truth answers that reflect your actual use case.
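
A dataset like that can be as simple as a two-column CSV of prompts and ground-truth answers. The column names below are illustrative, not a required schema:

```python
# Illustrative test-case CSV; the column names are not a required schema.
import csv
import io

DATASET_CSV = """prompt,ground_truth
What is the capital of France?,Paris
Summarize our refund policy in one sentence.,Refunds are issued within 30 days of purchase.
"""

test_cases = list(csv.DictReader(io.StringIO(DATASET_CSV)))
for case in test_cases:
    print(case["prompt"], "->", case["ground_truth"])
```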


Out-of-the-box autoraters

Don't want to reinvent the wheel? We've got you covered. Spin up pre-built autoraters to help check for common stuff like coherence, factuality, and conciseness. You can get meaningful results in minutes.


Build your own autorater

This is the killer feature. Because there is no one-size-fits-all eval to tell you if your AI matches your company’s brand voice or adheres to the specific rules of your application, Stax lets you easily define your own criteria and build a custom autorater.

  • Need your chatbot to be “helpful but not overly chatty”? Build a rater for that.

  • Need to ensure your summarizer never includes PII? Build a rater for that.

  • Need your code generator to match your team’s style guide? You get the idea.

You define the “secret sauce” of what makes an output good for you, and Stax lets you test for it at scale against your entire dataset.
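
To make the idea concrete, the “helpful but not overly chatty” criterion above might boil down to a rubric like the one below, sketched here as a plain judge prompt rather than Stax's actual configuration:

```python
# Illustrative custom criterion for a support chatbot; not Stax's actual API.
CHATTINESS_RUBRIC = """You are reviewing a customer-support chatbot reply.

Reply:
{reply}

Score the reply on two axes, each from 1 to 5:
- helpfulness: does it fully answer the user's question?
- concision: is it free of filler, apologies, and unnecessary preamble?

Respond with JSON like {{"helpfulness": 4, "concision": 2}}."""

def build_judge_prompt(reply: str) -> str:
    # The resulting prompt can be sent to any judge model, as in the
    # autorater sketch earlier in this post.
    return CHATTINESS_RUBRIC.format(reply=reply)
```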


Stop guessing, start evaluating

Stax helps you move past the era of crossing your fingers and shipping LLM features. It's time to treat them like any other part of your production stack: with rigorous testing and robust tooling. Stax helps you understand, iterate on, and improve your LLM-powered features. Start evaluating at stax.withgoogle.com and join our Discord channel to let us know what you think.