SDK Functions
convert_otlp_traces_data
```python
dbnl.convert_otlp_traces_data(
    data: pd.Series[Any],
    format: Literal['otlp_json', 'otlp_proto'] | None = None,
) -> pd.Series[Any]
```
Converts a Series of OTLP TracesData to DBNL spans.
Parameters:
`data` – Series of OTLP TracesData
`format` – OTLP TracesData format (`otlp_json` or `otlp_proto`), or `None` to infer the format from the data
Returns: Series of spans data
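For reference, the sketch below builds a minimal OTLP/JSON TracesData document of the kind each element of `data` is expected to hold. The field names follow the OTLP JSON encoding; the span ids, timestamps, and names are illustrative placeholders.

```python
import json

# Minimal OTLP/JSON TracesData document; field names follow the OTLP
# JSON encoding (resourceSpans -> scopeSpans -> spans). The ids and
# timestamps below are illustrative placeholders.
traces_data = {
    "resourceSpans": [
        {
            "resource": {"attributes": []},
            "scopeSpans": [
                {
                    "scope": {"name": "example-tracer"},
                    "spans": [
                        {
                            "traceId": "5b8efff798038103d269b633813fc60c",
                            "spanId": "eee19b7ec3c1b174",
                            "name": "llm.completion",
                            "startTimeUnixNano": "1736935193000000000",
                            "endTimeUnixNano": "1736935194000000000",
                        }
                    ],
                }
            ],
        }
    ]
}

payload = json.dumps(traces_data)
```

A `pd.Series` of such JSON strings could then be passed to `dbnl.convert_otlp_traces_data` with `format='otlp_json'`, or with `format=None` to let the SDK infer the encoding.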
create_llm_model
```python
dbnl.create_llm_model(*,
    name: str,
    description: str | None = None,
    type: Literal['completion', 'embedding'] | None = 'completion',
    provider: str,
    model: str,
    params: dict[str, Any] = {},
) -> LLMModel
```
Create an LLM Model.
Parameters:
`name` – Model name
`description` – Model description, defaults to `None`
`type` – Model type (e.g. completion or embedding), defaults to "completion"
`provider` – Model provider (e.g. openai, bedrock, etc.)
`model` – Model (e.g. gpt-4, gpt-3.5-turbo, etc.)
`params` – Model provider parameters (e.g. API key), defaults to `{}`
Returns: LLM Model
create_metric
```python
dbnl.create_metric(*,
    project: Project,
    name: str,
    expression_template: str,
    description: str | None = None,
    greater_is_better: bool | None = None,
) -> Metric
```
Create a new Metric.
Parameters:
`project` – Project to create the Metric in
`name` – Name for the Metric
`expression_template` – Expression template string, e.g. `token_count({RUN}.question)`
`description` – Optional description of what computation the Metric is performing
`greater_is_better` – Flag indicating whether greater values are semantically "better" than lesser values
Raises:
`DBNLNotLoggedInError` – dbnl SDK is not logged in. See `login`.
`DBNLInputValidationError` – Input does not conform to expected format
Returns: Created Metric
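The `{RUN}` placeholder in an expression template is substituted by dbnl at evaluation time; the template itself is just a string. A small sketch (the `metric_template` helper is hypothetical, not part of the SDK):

```python
def metric_template(fn: str, column: str) -> str:
    # Build "fn({RUN}.column)" while keeping the literal {RUN}
    # placeholder intact (doubled braces escape it in the f-string).
    return f"{fn}({{RUN}}.{column})"

template = metric_template("token_count", "question")
# template == "token_count({RUN}.question)", matching the example above
```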
create_project
```python
dbnl.create_project(*,
    name: str,
    description: str | None = None,
    schedule: Literal['daily', 'hourly'] | None = 'daily',
    default_llm_model_id: str | None = None,
    default_llm_model_name: str | None = None,
    template: Literal['default'] | None = 'default',
) -> Project
```
Create a new Project.
Returns: Project
Examples:
```python
import dbnl

dbnl.login()

proj_1 = dbnl.create_project(name="test_p1")
# DBNLConflictingProjectError: A Project with name test_p1 already exists.
proj_2 = dbnl.create_project(name="test_p1")
```
delete_llm_model
```python
dbnl.delete_llm_model(*,
    llm_model_id: str,
) -> None
```
Delete an LLM Model by id.
delete_metric
```python
dbnl.delete_metric(*,
    metric_id: str,
) -> None
```
Delete a Metric by ID.
Parameters:
`metric_id` – ID of the Metric to delete
Raises:
`DBNLNotLoggedInError` – dbnl SDK is not logged in. See `login`.
`DBNLAPIValidationError` – dbnl API failed to validate the request
Returns:
None
get_llm_model
```python
dbnl.get_llm_model(*,
    llm_model_id: str,
) -> LLMModel
```
Get an LLM Model by id.
Parameters:
`llm_model_id` – Model id
Returns: LLM Model if found
get_llm_model_by_name
```python
dbnl.get_llm_model_by_name(*,
    name: str,
) -> LLMModel
```
Get an LLM Model by name.
get_or_create_llm_model
```python
dbnl.get_or_create_llm_model(*,
    name: str,
    description: str | None = None,
    type: Literal['completion', 'embedding'] | None = 'completion',
    provider: str,
    model: str,
    params: dict[str, Any] = {},
) -> LLMModel
```
Get an LLM Model by name, or create it if it does not exist.
Parameters:
`name` – Model name
`description` – Model description, defaults to `None`
`type` – Model type (e.g. completion or embedding), defaults to "completion"
`provider` – Model provider (e.g. openai, bedrock, etc.)
`model` – Model (e.g. gpt-4, gpt-3.5-turbo, etc.)
`params` – Model provider parameters (e.g. API key), defaults to `{}`
Returns: Model
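The call follows the usual get-or-create pattern: look the model up by name and create it only on a miss. A minimal in-memory sketch of that pattern (illustrative only; the real SDK resolves this against the dbnl service, and attributes passed on a later call for an existing name do not overwrite it):

```python
registry: dict[str, dict] = {}

def get_or_create(name: str, **attrs) -> dict:
    # Return the existing record for `name`, creating it on first use.
    # Attributes passed on later calls for the same name are ignored.
    if name not in registry:
        registry[name] = {"name": name, **attrs}
    return registry[name]

m1 = get_or_create("support-llm", provider="openai", model="gpt-4")
m2 = get_or_create("support-llm", provider="openai", model="gpt-4")
```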
get_or_create_project
```python
dbnl.get_or_create_project(*,
    name: str,
    description: str | None = None,
    schedule: Literal['daily', 'hourly'] | None = 'daily',
    default_llm_model_id: str | None = None,
    default_llm_model_name: str | None = None,
    template: Literal['default'] | None = 'default',
) -> Project
```
Get the Project with the specified name, or create a new one if it does not exist.
Raises:
`DBNLNotLoggedInError` – dbnl SDK is not logged in. See `login`.
`DBNLAPIValidationError` – dbnl API failed to validate the request
Returns: Newly created or matching existing Project
Examples:
```python
import dbnl

dbnl.login()

proj_1 = dbnl.create_project(name="test_p1")
proj_2 = dbnl.get_or_create_project(name="test_p1")
# Calling get_or_create_project will yield the same Project object
assert proj_1.id == proj_2.id
```
get_project
```python
dbnl.get_project(*,
    name: str,
) -> Project
```
Retrieve a Project by name.
Examples:
```python
import dbnl

dbnl.login()

proj_1 = dbnl.create_project(name="test_p1")
proj_2 = dbnl.get_project(name="test_p1")
# Calling get_project will yield the same Project object
assert proj_1.id == proj_2.id

# DBNLProjectNotFoundError: A dbnl Project with name not_exist does not exist
proj_3 = dbnl.get_project(name="not_exist")
```
log
```python
dbnl.log(*,
    project_id: str,
    data: DataFrame,
    data_start_time: datetime,
    data_end_time: datetime,
    wait_timeout: float | None = 600,
) -> None
```
Log data for a date range to a Project.
Parameters:
`project_id` – ID of the Project to send the logs to
`data` – Pandas DataFrame with the data
`data_start_time` – Data start time
`data_end_time` – Data end time
`wait_timeout` – If set, the function blocks for up to `wait_timeout` seconds until the data is done processing; defaults to 10 minutes
Raises:
`DBNLNotLoggedInError` – dbnl SDK is not logged in. See `login`.
`DBNLInputValidationError` – Input does not conform to expected format
Examples:
```python
from datetime import datetime, UTC

import dbnl
import pandas as pd

dbnl.login()

proj = dbnl.get_or_create_project(name="test")
test_data = pd.DataFrame(
    {
        "timestamp": [datetime(2025, 1, 1, 11, 39, 53, tzinfo=UTC)],
        "input": ["Hello"],
        "output": ["World"],
    }
)
dbnl.log(
    project_id=proj.id,
    data=test_data,
    data_start_time=datetime(2025, 1, 1, tzinfo=UTC),
    data_end_time=datetime(2025, 1, 2, tzinfo=UTC),
)
```
login
```python
dbnl.login(*,
    api_token: str | None = None,
    api_url: str | None = None,
    app_url: str | None = None,
    verify: bool = True,
) -> None
```
Set up the dbnl SDK to make authenticated requests. After `login` runs successfully, the dbnl client can issue secure, authenticated requests against hosted endpoints of the dbnl service.
Parameters:
`api_token` – dbnl API token for authentication; the token can be found on the `/tokens` page of the dbnl app. If `None` is provided, the environment variable `DBNL_API_TOKEN` is used by default.
`namespace_id` – The namespace ID to use for the session.
`api_url` – The base url of the Distributional API. By default, this is set to `localhost:8080/api` for sandbox users; other users should contact their system admin. If `None` is provided, the environment variable `DBNL_API_URL` is used by default.
`app_url` – An optional base url of the Distributional app. If this variable is not set, the app url is inferred from the `DBNL_API_URL` variable. On-prem users who cannot reach the Distributional UI should contact their system admin.
update_llm_model
```python
dbnl.update_llm_model(*,
    llm_model_id: str,
    name: str | None = None,
    description: str | None = None,
    model: str | None = None,
    params: dict[str, Any] | None = None,
) -> LLMModel
```
Update an LLM Model by id.
Parameters:
`llm_model_id` – Model id
`name` – Model name
`description` – Model description, defaults to `None`
`model` – Model (e.g. gpt-4, gpt-3.5-turbo, etc.)
`params` – Model provider parameters (e.g. API key), defaults to `None`
Returns: Updated LLM Model
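Given the keyword defaults, fields left as `None` are presumably left unchanged on the server; this is a reasonable reading of the signature, not confirmed SDK behavior. That merge can be sketched as:

```python
def apply_update(current: dict, **changes) -> dict:
    # Assumed partial-update semantics: drop None values so only
    # explicitly provided fields overwrite the current record.
    return {**current, **{k: v for k, v in changes.items() if v is not None}}

current = {"name": "support-llm", "model": "gpt-3.5-turbo", "description": None}
updated = apply_update(current, model="gpt-4", name=None)
```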