© 2025 Distributional, Inc. All Rights Reserved.

Welcome to Distributional

Introduction to the Distributional AI Testing Platform



At Distributional, we make AI testing easy, so you can build with confidence. Here’s how it works:

  1. ✅ Connect to your existing data sources.

  2. 🔄 Run automated tests on a regular schedule.

  3. 📢 Get alerts when your AI application needs your attention.

Simple, seamless, and built for peace of mind. Let's help you improve your AI uptime.
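As a rough illustration, the three steps above map onto a short loop in the Python SDK. This is a minimal sketch, not a verified recipe: the function names (`login`, `get_or_create_project`, `create_run`, `report_results`, `close_run`, `create_test_session`) appear in the Python SDK reference of these docs, but the `dbnl` package name and all argument names below are assumptions.

```python
# Hedged sketch of the connect -> test -> alert loop; argument names are
# illustrative assumptions, not verified SDK signatures.

def report_and_test(api_token, project_name, results):
    """Upload one run of application results and start a test session."""
    import dbnl  # assumed package name for the Distributional Python SDK

    dbnl.login(api_token=api_token)                      # 1. connect to the platform
    project = dbnl.get_or_create_project(name=project_name)
    run = dbnl.create_run(project=project)               # one batch of app outputs
    dbnl.report_results(run=run, data=results)           # column/scalar results
    dbnl.close_run(run=run)                              # seal the run for testing
    return dbnl.create_test_session(experiment_run=run)  # 2. run automated tests
```

Scheduling a function like this from your orchestration layer covers steps 2 and 3: each session's tests run against the configured baseline, and notifications fire when results need attention.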

To get access to the Distributional platform, please reach out to our team.


AI Test Cases

When do you need AI Testing? To get a sense of what testing could look like in practice, here are some questions that you can answer through AI Testing across the AI Software Development Lifecycle:

  • How well do my evals map to production behavior?

  • When do I increase coverage in my golden dataset to address edge cases?

  • If something changes in one component of my application, how do I assess the cascading impact to other dependent components?

  • Are end-users catching issues that my evals miss?

  • How do I compare new application updates to prior ones in my production environment?

  • What’s the impact on behavior if the model, data, or usage shifts?

  • Do I know when shifts happen?

  • How do I understand what’s causing anomalous behavior?

  • What happens if I switch to another LLM?

  • How do I give other teams visibility into shifts that happen?

  • What’s the impact of pushing any change to my application? Can I push changes straight to production, or does each change require a new dev cycle?

If you are interested in finding the answers to the above, Distributional can help. The Distributional platform provides a standardized approach to AI testing across all of your applications.

Ready to start using Distributional? Head straight to Getting Started to get set up on the platform and start testing your AI application.

If you would first like to learn more about the Distributional platform, head over to the Learning About Distributional section. If you are new to AI Testing or would like to know how it fits into the AI Software Development Lifecycle, continue forward in the Introduction to AI Testing section, starting with the Motivation and What is AI Testing pages.
