© 2025 Distributional, Inc. All Rights Reserved.

create_run_config

Create a new dbnl RunConfig

dbnl.create_run_config(
    *,
    project: Project,
    columns: list[dict[str, Any]],
    scalars: Optional[list[dict[str, Any]]] = None,
    description: Optional[str] = None,
    display_name: Optional[str] = None,
    row_id: Optional[list[str]] = None,
    components_dag: Optional[dict[str, list[str]]] = None,
) -> RunConfig

Parameters

Arguments
Description

project

The Project this RunConfig is associated with.

columns

A list of column schema specs for the uploaded data. Required keys: name and type; optional keys: component, description, and greater_is_better. type can be int, float, category, boolean, or string. component is a string that indicates the source of the data, e.g. "component": "sentiment-classifier" or "component": "fraud-predictor". Specified components must be present in the components_dag dictionary. greater_is_better is a boolean that indicates whether larger values are better than smaller ones; False indicates smaller values are better, and None indicates no preference. See the Column Schema section below for more information. Example:

columns=[{"name": "pred_proba", "type": "float", "component": "fraud-predictor"}, {"name": "decision", "type": "boolean", "component": "threshold-decision"}, {"name": "requests", "type": "string", "description": "curl request response msg"}]
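For illustration only, a columns list like the one above can be generated from a sample row of data. The infer_columns helper below is hypothetical (not part of the dbnl SDK) and maps built-in Python types onto the dbnl type names listed under Supported Types:

```python
# Hypothetical helper (not part of the dbnl SDK): build column schema
# specs from a sample row, using the dbnl type names.
_PY_TO_DBNL = {int: "int", float: "float", bool: "boolean", str: "string"}

def infer_columns(sample_row):
    """Map each key in a sample row to a dbnl column schema spec."""
    columns = []
    for name, value in sample_row.items():
        # Exact-type lookup, so bool maps to "boolean", not "int".
        dbnl_type = _PY_TO_DBNL.get(type(value))
        if dbnl_type is None:
            raise TypeError(f"no dbnl type mapping for {name!r}: {type(value)}")
        columns.append({"name": name, "type": dbnl_type})
    return columns

print(infer_columns({"pred_proba": 0.93, "decision": True, "requests": "ok"}))
```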

scalars

NOTE: scalars is available in SDK v0.0.15 and above. A list of scalar schema specs for the uploaded data. The scalar schema is identical to the column schema: required keys name and type; optional keys component, description, and greater_is_better, with the same meanings as for columns. Example: scalars=[{"name": "accuracy", "type": "float", "component": "fraud-predictor"}, {"name": "error_type", "type": "category"}]

description

An optional description of the RunConfig, defaults to None. Descriptions are limited to 255 characters.

display_name

An optional display name of the RunConfig, defaults to None. Display names do not have to be unique.

row_id

An optional list of the column names that can be used as unique identifiers, defaults to None.

components_dag

An optional dictionary representing the directed acyclic graph (DAG) of the specified components, defaults to None. Every component listed in the column schema must be present in the components_dag. Example: components_dag={"fraud-predictor": ["threshold-decision"], "threshold-decision": []} See the Components DAG section below for more information.

Column Schema

Column Names

Column names can only be alphanumeric characters and underscores.
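A minimal pre-flight check for this rule might look like the following sketch (not part of the SDK; it assumes the rule is exactly ASCII alphanumerics and underscores):

```python
import re

# Column names may contain only alphanumeric characters and underscores.
_COLUMN_NAME_RE = re.compile(r"^[A-Za-z0-9_]+$")

def is_valid_column_name(name: str) -> bool:
    """Return True if name contains only alphanumerics and underscores."""
    return bool(_COLUMN_NAME_RE.match(name))

print(is_valid_column_name("pred_proba"))   # True
print(is_valid_column_name("email body"))   # False: space not allowed
```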

Supported Types

The following types are supported as type in a column schema:

type
Notes

float

int

boolean

string

Any arbitrary string values. Raw string type columns do not produce any histogram or scatterplot on the web UI.

category

Equivalent of the pandas categorical data type. Currently only supports categories of string values.

list

Currently only supports list of string values. List type columns do not produce any histogram or scatterplot on the web UI.

Components

The optional component key specifies the source of the data column in relation to the AI/ML app subcomponents. Components are used in visualizing the components DAG.

Components DAG

The components_dag dictionary specifies the topological layout of the AI/ML app. For each key-value pair, the key is a source component and the value is the list of its immediate downstream components. The following code snippet describes an example components DAG.

components_dag={
    "TweetSource": ["EntityExtractor", "SentimentClassifier"],
    "EntityExtractor": ["TradeRecommender"],
    "SentimentClassifier": ["TradeRecommender"],
    "TradeRecommender": [],
    "Global": [],
}
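Because components_dag must be acyclic, it can be worth validating the dictionary before creating a RunConfig. The sketch below (not part of the SDK) applies Kahn's algorithm to detect cycles:

```python
def is_acyclic(dag):
    """Return True if the components_dag mapping contains no cycles."""
    # Count incoming edges for every node mentioned in the mapping.
    indegree = {node: 0 for node in dag}
    for children in dag.values():
        for child in children:
            indegree[child] = indegree.get(child, 0) + 1
    # Repeatedly remove nodes with no incoming edges (Kahn's algorithm).
    queue = [n for n, d in indegree.items() if d == 0]
    seen = 0
    while queue:
        node = queue.pop()
        seen += 1
        for child in dag.get(node, []):
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    # Every node was removed only if there is no cycle.
    return seen == len(indegree)

components_dag = {
    "TweetSource": ["EntityExtractor", "SentimentClassifier"],
    "EntityExtractor": ["TradeRecommender"],
    "SentimentClassifier": ["TradeRecommender"],
    "TradeRecommender": [],
    "Global": [],
}
print(is_acyclic(components_dag))  # True
```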

Returns

Type
Description

RunConfig

A new dbnl RunConfig.

Examples

Basic Usage

import dbnl
dbnl.login()


proj = dbnl.get_or_create_project(name="test_p1")
# create a new RunConfig
runcfg1 = dbnl.create_run_config(
    project=proj,
    columns=[
        {"name": "error_type", "type": "category"},
        {"name": "email", "type": "string", "description": "raw email text content from source"},
        {"name": "spam-pred", "type": "boolean"},
    ],
    display_name="Basic RunConfig for spam prediction",
)

RunConfig with DAG

import dbnl
dbnl.login()


proj = dbnl.get_or_create_project(name="test_p1")
# create a new RunConfig with DAG
runcfg1 = dbnl.create_run_config(
    project=proj,
    columns=[
        {"name": "error_type", "type": "category"},
        {"name": "email", "type": "string", "component": "data_source", "description": "raw email text content from source"},
        {"name": "spam-pred", "type": "boolean", "component": "spam_classifier"},
    ],
    display_name="Basic RunConfig for spam prediction",
    components_dag={
        "data_source": ["spam_classifier"],
        "spam_classifier": [],
    },
)

RunConfig with row_id

import dbnl
dbnl.login()


proj = dbnl.get_or_create_project(name="test_p1")
# create a new RunConfig
runcfg1 = dbnl.create_run_config(
    project=proj,
    columns=[
        {"name": "error_type", "type": "category"},
        {"name": "email", "type": "string", "description": "raw email text content from source"},
        {"name": "spam-pred", "type": "boolean"},
        {"name": "email_id", "type": "string", "description": "unique id for each email"},
    ],
    display_name="Basic RunConfig for spam prediction",
    row_id=["email_id"],
)
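Since row_id columns are meant to act as unique identifiers, a simple client-side sanity check (again, not part of the SDK) is to confirm that the row_id tuples are distinct before uploading:

```python
def row_ids_unique(rows, row_id):
    """Return True if the row_id columns jointly identify every row."""
    keys = [tuple(row[col] for col in row_id) for row in rows]
    return len(keys) == len(set(keys))

rows = [
    {"email_id": "a1", "spam_pred": True},
    {"email_id": "a2", "spam_pred": False},
]
print(row_ids_unique(rows, ["email_id"]))  # True
```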

RunConfig with scalars

import dbnl
dbnl.login()


proj = dbnl.get_or_create_project(name="test_p1")
# create a new RunConfig
runcfg1 = dbnl.create_run_config(
    project=proj,
    columns=[
        {"name": "error_type", "type": "category"},
        {"name": "email", "type": "string", "description": "raw email text content from source"},
        {"name": "spam-pred", "type": "boolean"},
        {"name": "email_id", "type": "string", "description": "unique id for each email"},
    ],
    scalars=[
        {"name": "model_F1", "type": "float"},
        {"name": "model_recall", "type": "float"},
    ],
    display_name="Basic RunConfig for spam prediction",
)

