© 2025 Distributional, Inc. All Rights Reserved.

Running Tests


You can run any tests you've created (or just the default App Similarity Index test) to investigate the behavior of your application.

Running Your Tests

When you run a Test Session, you are running your tests against a given Experiment Run.

Choose a Baseline Run

If you haven't already, take a look at the documentation on setting a Baseline Run. All of the methods for running a test let you choose a Baseline Run at the time of Test Session creation, but you can also set a default.

Create a Test Session

Tests are run within the context of a Test Session: a collection of tests executed against an Experiment Run and compared with a Baseline Run. Creating a Test Session immediately runs its tests, and you can do so via the UI or the SDK:

Regardless of how you choose to create your Test Session, you can specify tags to choose a subset of tests to run in that given session. The following options for tags are available:

  • Include Tags: Only tests with any of these tags will be run

  • Exclude Tags: Only tests with none of these tags will be run

  • Required Tags: Only tests with every one of these tags will be run
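The three tag options behave like set operations on each test's tags. The helper below is an illustrative sketch, not part of the dbnl SDK, showing which tests a given combination of include, exclude, and required tags would select:

```python
def select_tests(tests, include_tags=None, exclude_tags=None, required_tags=None):
    """Illustrative helper (not part of the dbnl SDK) showing how the
    three tag options narrow the set of tests run in a Test Session.

    tests: mapping of test name -> set of tags on that test.
    """
    selected = []
    for name, tags in tests.items():
        # Include Tags: the test must carry at least one of these tags
        if include_tags and not (tags & set(include_tags)):
            continue
        # Exclude Tags: the test must carry none of these tags
        if exclude_tags and (tags & set(exclude_tags)):
            continue
        # Required Tags: the test must carry every one of these tags
        if required_tags and not set(required_tags) <= tags:
            continue
        selected.append(name)
    return selected

# Hypothetical test names and tags, purely for illustration
tests = {
    "latency_check": {"performance", "nightly"},
    "drift_check": {"quality"},
    "cost_check": {"performance", "cost"},
}

# Only tests tagged "performance" that are not also tagged "cost"
print(select_tests(tests, include_tags=["performance"], exclude_tags=["cost"]))
# → ['latency_check']
```

Note that Include Tags is an "any of" match while Required Tags is an "all of" match; combining them with Exclude Tags lets you carve out precisely the subset of tests a session should run.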

You can run the tests associated with a Project by clicking the "Run Tests" button on your Project. This button opens a modal that lets you specify the Baseline and Experiment Runs, as well as the tags of the tests you would like to include in or exclude from the Test Session.

Tests can be run via the SDK function create_test_session. Most likely, you will want to create a Test Session shortly after you've reported and closed a Run. See Reporting Runs for more information.

import dbnl
import pandas as pd
dbnl.login()

# More likely, you will use the run reference returned by
# report_run_with_results. See the "Reporting Runs" section in the
# docs (linked above) for more information.
run = dbnl.get_run(run_id="run_abc123")

# See the create_test_session reference documentation (linked above)
# for more options, like overriding the baseline or specifying tags
# to choose a subset of tests to run
dbnl.create_test_session(
  experiment_run=run,
)

Continue on to Reviewing Tests to learn how to examine and interpret the results from your Test Session.
