Overview

Distributional's adaptive testing platform


Distributional is an adaptive testing platform purpose-built for AI applications. It enables you to test AI application data at scale so you can define, understand, and improve AI behavior and keep it consistent and stable over time.

For access to the Distributional platform, please reach out to our team.


Adaptive Testing Workflow

Define Desired Behavior: Automatically create a behavioral fingerprint from the app’s runtime logs and any existing development metrics, and generate associated tests to detect changes in that behavior over time.

Understand Changes in Behavior: Get alerted when app behavior changes, understand what is changing, and pinpoint at any level of depth what is causing the change so you can quickly take appropriate action.

Improve Based on Changes: Easily add, remove, or recalibrate tests over time so you always have a dynamic representation of the desired state that you can use to test new models, roll out upgrades, or accelerate new app development.
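In practice, this loop is driven from the Python SDK: you report runs of column-level data from your application, mark one run as the baseline, and let the generated tests compare later runs against it. The sketch below is illustrative only; the function names (dbnl.login, dbnl.get_or_create_project, dbnl.report_run_with_results, dbnl.set_run_as_baseline) and their parameters are assumptions and may differ from the current SDK, so consult the Python SDK reference for the authoritative API.

```python
# Illustrative sketch only. Function names and signatures are assumptions;
# see the Python SDK reference for the authoritative API.
import dbnl
import pandas as pd

dbnl.login(api_token="YOUR_API_TOKEN")  # token issued for your namespace (assumed parameter name)

# Define desired behavior: report a run built from the app's runtime logs
# and any development metrics you already collect.
project = dbnl.get_or_create_project(name="my-ai-app")
columns = pd.DataFrame(
    {
        "question": ["What is our refund policy?"],
        "response_length": [212],
        "latency_ms": [840],
    }
)
run = dbnl.report_run_with_results(project=project, column_data=columns)

# Understand changes: later runs are tested against this baseline, and you
# are alerted when the distributions of these columns shift.
dbnl.set_run_as_baseline(run=run)
```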


Integrating Distributional with Other Tools

Distributional’s platform is designed to easily integrate with your existing infrastructure, including data stores, orchestrators, alerting tools, and AI platforms. If you are already using a model evaluation framework as part of app development, its results can be used as an input to further define behavior in Distributional.
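For example, if an evaluation framework already scores each response, those scores can simply be reported as additional columns of a run, where they become part of the behavioral fingerprint and are tested like any other metric. The column names and the reporting call in this sketch are hypothetical:

```python
# Hypothetical sketch: column names and the reporting call are illustrative,
# not the definitive integration path.
import dbnl
import pandas as pd

eval_results = pd.DataFrame(
    {
        "input": ["What is our refund policy?"],
        "output": ["Refunds are available within 30 days of purchase."],
        # Scores produced by an existing evaluation framework:
        "faithfulness_score": [0.92],
        "relevance_score": [0.88],
    }
)

dbnl.report_run_with_results(
    project=dbnl.get_or_create_project(name="my-ai-app"),
    column_data=eval_results,
)
```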

Ready to start using Distributional? Head straight to our Quick Start to get set up on the platform and start testing your AI application.

Figure: Adaptive testing workflow in Distributional
Figure: Integrate Distributional with your existing infrastructure