# SDK Functions

### `convert_otlp_traces_data`

```python
dbnl.convert_otlp_traces_data(data: pd.Series[Any],
	format: Literal['otlp_json', 'otlp_proto'] | None = None
) → pd.Series[Any]
```

Converts a Series of OTLP TracesData to a Series of DBNL spans matching the [DBNL semantic convention](https://docs.dbnl.com/configuration/dbnl-semantic-convention).

The resulting Series can be used as-is to fill the spans column of a DataFrame to be logged with the `dbnl.log` function.

For a complete specification of the TracesData format, see the [OTLP specification](https://github.com/open-telemetry/opentelemetry-proto/blob/e5a5dc1f2c7f9e0eefbe061f1d5c09c67722f4d6/opentelemetry/proto/trace/v1/trace.proto#L38-L45).

* **Parameters:**
  * **`data`** – Series of OTLP TracesData
  * **`format`** – OTLP TracesData format (`otlp_json` or `otlp_proto`) or `None` to infer from data
* **Returns:** Series of spans data
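#### Examples:

A minimal sketch of preparing input for this function. The span ids and timestamps below are illustrative, and the conversion call itself is shown commented out since it requires the dbnl SDK:

```python
import json

import pandas as pd

# Build a minimal OTLP TracesData payload in JSON encoding.
# The ids and timestamps are illustrative placeholders.
traces_data = {
    "resourceSpans": [
        {
            "scopeSpans": [
                {
                    "spans": [
                        {
                            "traceId": "5b8efff798038103d269b633813fc60c",
                            "spanId": "eee19b7ec3c1b174",
                            "name": "chat",
                            "startTimeUnixNano": "1736937593000000000",
                            "endTimeUnixNano": "1736937594000000000",
                        }
                    ]
                }
            ]
        }
    ]
}

# One TracesData document per row of the Series.
data = pd.Series([json.dumps(traces_data)])

# With the dbnl SDK installed:
# spans = dbnl.convert_otlp_traces_data(data, format="otlp_json")
```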

### `create_llm_model`

```python
dbnl.create_llm_model(*,
	name: str,
	description: str | None = None,
	type: Literal['completion', 'embedding'] | None = 'completion',
	provider: str,
	model: str,
	params: dict[str, Any] = {}
) → LLMModel
```

Create an LLM Model.

* **Parameters:**
  * **`name`** – Model name
  * **`description`** – Model description, defaults to `None`
  * **`type`** – Model type (`completion` or `embedding`), defaults to `completion`
  * **`provider`** – Model provider (e.g. openai, bedrock, etc.)
  * **`model`** – Model (e.g. gpt-4, gpt-3.5-turbo, etc.)
  * **`params`** – Model provider parameters (e.g. api key), defaults to `{}`
* **Returns:** LLM Model
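#### Examples:

A sketch of assembling the arguments. The provider, model, and `params` keys below are illustrative placeholders (the actual `params` keys depend on your provider integration), and the dbnl call is commented out since it requires an authenticated session:

```python
# Provider credentials go in `params`; "api_key" is a hypothetical key
# name, not a documented one.
llm_model_kwargs = {
    "name": "default-completion-model",
    "type": "completion",  # or "embedding"
    "provider": "openai",
    "model": "gpt-4",
    "params": {"api_key": "sk-placeholder"},
}

# With an authenticated dbnl session:
# model = dbnl.create_llm_model(**llm_model_kwargs)
```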

### `create_metric`

```python
dbnl.create_metric(*,
	project: Project,
	name: str,
	expression: str,
	description: str | None = None,
	greater_is_better: bool | None = None
) → Metric
```

Create a new Metric

* **Parameters:**
  * **`project`** – The [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project) to create the [Metric](https://docs.dbnl.com/v0.30.x/reference/classes#Metric) for
  * **`name`** – Name for the Metric
  * **`expression`** – Expression string e.g. `token_count(traces.question)`
  * **`description`** – Optional description of what computation the metric is performing
  * **`greater_is_better`** – Flag indicating whether greater values are semantically ‘better’ than lesser values
* **Raises:**
  * **`DBNLNotLoggedInError`** – dbnl SDK is not logged in. See [`login`](https://github.com/dbnlAI/docs/blob/main/reference/python-sdk/python-sdk.md#login).
  * **`DBNLInputValidationError`** – Input does not conform to expected format
* **Returns:** Created Metric
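#### Examples:

A sketch of the call. The expression reuses `token_count(traces.question)` from the parameter docs above, and the dbnl calls are commented out because they require an authenticated session:

```python
# Metric expressions reference logged columns via the `traces.` prefix.
metric_kwargs = {
    "name": "question_token_count",
    "expression": "token_count(traces.question)",
    "description": "Number of tokens in the question column",
    "greater_is_better": False,  # fewer tokens is treated as better here
}

# With an authenticated dbnl session and an existing project:
# proj = dbnl.get_or_create_project(name="test")
# metric = dbnl.create_metric(project=proj, **metric_kwargs)
```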

### `create_project`

```python
dbnl.create_project(*,
	name: str,
	description: str | None = None,
	schedule: Literal['daily', 'hourly'] | None = 'daily',
	default_llm_model_id: str | None = None,
	default_llm_model_name: str | None = None,
	template: Literal['default'] | None = 'default'
) → Project
```

Create a new Project

* **Parameters:**
  * **`name`** – Name for the Project
  * **`description`** – Description for the [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project), defaults to `None`. Description is limited to 255 characters.
* **Raises:**
  * **`DBNLNotLoggedInError`** – dbnl SDK is not logged in. See [`login`](https://github.com/dbnlAI/docs/blob/main/reference/python-sdk/python-sdk.md#login).
  * **`DBNLAPIValidationError`** – dbnl API failed to validate the request
  * **`DBNLConflictingProjectError`** – [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project) with the same name already exists
* **Returns:** Project

#### Examples:

```python
import dbnl

dbnl.login()

proj_1 = dbnl.create_project(name="test_p1")

# DBNLConflictingProjectError: A Project with name test_p1 already exists.
proj_2 = dbnl.create_project(name="test_p1")
```

### `delete_llm_model`

```python
dbnl.delete_llm_model(*,
	llm_model_id: str
) → None
```

Delete an [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) by id.

* **Parameters:**
  * **`llm_model_id`** – [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) id
* **Returns:** `None`

### `delete_metric`

```python
dbnl.delete_metric(*,
	metric_id: str
) → None
```

Delete a [Metric](https://docs.dbnl.com/v0.30.x/reference/classes#Metric) by ID

* **Parameters:**
  * **`metric_id`** – ID of the metric to delete
* **Raises:**
  * **`DBNLNotLoggedInError`** – dbnl SDK is not logged in. See [`login`](https://github.com/dbnlAI/docs/blob/main/reference/python-sdk/python-sdk.md#login).
  * **`DBNLAPIValidationError`** – dbnl API failed to validate the request
* **Returns:** `None`

### `flatten_otlp_traces_data`

```python
dbnl.flatten_otlp_traces_data(data: pd.Series[Any],
	format: Literal['otlp_json', 'otlp_proto'] | None = None
) → DataFrame
```

Flattens a Series of OTLP TracesData to a DataFrame matching the [DBNL semantic convention](https://docs.dbnl.com/configuration/dbnl-semantic-convention).

The resulting DataFrame can be used as-is to be logged with the `dbnl.log` function and will include all minimally required columns (`timestamp`, `input`, `output`) as well as the `spans` column for further flattening server-side.

For a complete specification of the TracesData format, see the [OTLP specification](https://github.com/open-telemetry/opentelemetry-proto/blob/e5a5dc1f2c7f9e0eefbe061f1d5c09c67722f4d6/opentelemetry/proto/trace/v1/trace.proto#L38-L45).

* **Parameters:**
  * **`data`** – Series of OTLP TracesData
  * **`format`** – OTLP TracesData format (`otlp_json` or `otlp_proto`) or `None` to infer from data
* **Returns:** DataFrame with columns timestamp, input, output, spans
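#### Examples:

A sketch of preparing input. The payload below is a minimal (empty) TracesData document purely to show the plumbing, and the dbnl calls are commented out because they require the SDK:

```python
import json

import pandas as pd

# One OTLP TracesData document per row; an empty document here, just to
# illustrate the shape of the input Series.
data = pd.Series([json.dumps({"resourceSpans": []})])

# With the dbnl SDK installed:
# df = dbnl.flatten_otlp_traces_data(data, format="otlp_json")
# df would contain the timestamp, input, output, and spans columns,
# ready to be passed to dbnl.log.
```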

### `get_llm_model`

```python
dbnl.get_llm_model(*,
	llm_model_id: str
) → LLMModel
```

Get an [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) by id.

* **Parameters:**
  * **`llm_model_id`** – Model id
* **Returns:** [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) if found

### `get_llm_model_by_name`

```python
dbnl.get_llm_model_by_name(*,
	name: str
) → LLMModel
```

Get an [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) by name

* **Parameters:**
  * **`name`** – [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) name
* **Returns:** [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) if found

### `get_or_create_llm_model`

```python
dbnl.get_or_create_llm_model(*,
	name: str,
	description: str | None = None,
	type: Literal['completion', 'embedding'] | None = 'completion',
	provider: str,
	model: str,
	params: dict[str, Any] = {}
) → LLMModel
```

Get an [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) by name, or create it if it does not exist.

* **Parameters:**
  * **`name`** – Model name
  * **`description`** – Model description, defaults to `None`
  * **`type`** – Model type (`completion` or `embedding`), defaults to `completion`
  * **`provider`** – Model provider (e.g. openai, bedrock, etc.)
  * **`model`** – Model (e.g. gpt-4, gpt-3.5-turbo, etc.)
  * **`params`** – Model provider parameters (e.g. api key), defaults to `{}`
* **Returns:** Newly created or existing LLM Model

### `get_or_create_project`

```python
dbnl.get_or_create_project(*,
	name: str,
	description: str | None = None,
	schedule: Literal['daily', 'hourly'] | None = 'daily',
	default_llm_model_id: str | None = None,
	default_llm_model_name: str | None = None,
	template: Literal['default'] | None = 'default'
) → Project
```

Get the [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project) with the specified name or create a new one if it does not exist

* **Parameters:**
  * **`name`** – Name for the Project
  * **`description`** – Description for the [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project), defaults to `None`
* **Raises:**
  * **`DBNLNotLoggedInError`** – dbnl SDK is not logged in. See [`login`](https://github.com/dbnlAI/docs/blob/main/reference/python-sdk/python-sdk.md#login).
  * **`DBNLAPIValidationError`** – dbnl API failed to validate the request
* **Returns:** Newly created or matching existing Project

#### Examples:

```python
import dbnl

dbnl.login()

proj_1 = dbnl.create_project(name="test_p1")
proj_2 = dbnl.get_or_create_project(name="test_p1")

# Calling get_or_create_project will yield same Project object
assert proj_1.id == proj_2.id
```

### `get_project`

```python
dbnl.get_project(*,
	project_id: str | None = None,
	name: str | None = None
) → Project
```

Retrieve a [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project) by id or name.

* **Parameters:**
  * **`project_id`** – The id of the existing Project.
  * **`name`** – The name of the existing Project.
* **Raises:**
  * **`DBNLNotLoggedInError`** – dbnl SDK is not logged in. See [`login`](https://github.com/dbnlAI/docs/blob/main/reference/python-sdk/python-sdk.md#login).
  * **`DBNLProjectNotFoundError`** – [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project) with the given id does not exist.
* **Returns:** Project

#### Examples:

```python
import dbnl

dbnl.login()

proj_1 = dbnl.create_project(name="test_p1")
proj_2 = dbnl.get_project(project_id=proj_1.id)

# Calling get_project will yield same Project object
assert proj_1.id == proj_2.id

# DBNLProjectNotFoundError: A dbnl Project with id not_exist does not exist
proj_3 = dbnl.get_project(project_id="not_exist")
```

### `get_project_by_name`

```python
dbnl.get_project_by_name(*,
	name: str
) → Project
```

Retrieve a [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project) by name.

* **Parameters:**
  * **`name`** – The name for the existing Project.
* **Raises:**
  * **`DBNLNotLoggedInError`** – dbnl SDK is not logged in. See [`login`](https://github.com/dbnlAI/docs/blob/main/reference/python-sdk/python-sdk.md#login).
  * **`DBNLProjectNameNotFoundError`** – [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project) with the given name does not exist.
* **Returns:** Project

#### Examples:

```python
import dbnl

dbnl.login()

proj_1 = dbnl.create_project(name="test_p1")
proj_2 = dbnl.get_project_by_name(name="test_p1")

# Calling get_project_by_name will yield same Project object
assert proj_1.id == proj_2.id

# DBNLProjectNameNotFoundError: A dbnl Project with name not_exist does not exist
proj_3 = dbnl.get_project_by_name(name="not_exist")
```

### `log`

```python
dbnl.log(*,
	project_id: str,
	data: DataFrame,
	data_start_time: datetime,
	data_end_time: datetime,
	wait_timeout: float | None = 600
) → None
```

Log data for a date range to a project.

* **Parameters:**
  * **`project_id`** – The id of the [Project](https://docs.dbnl.com/v0.30.x/reference/classes#Project) to send the logs to.
  * **`data`** – Pandas DataFrame with the data. See the [DBNL semantic convention](https://docs.dbnl.com/configuration/dbnl-semantic-convention).
  * **`data_start_time`** – Data start time.
  * **`data_end_time`** – Data end time.
  * **`wait_timeout`** – If set, the function will block for up to `wait_timeout` seconds until the data is done processing, defaults to 10 minutes.
* **Raises:**
  * **`DBNLNotLoggedInError`** – dbnl SDK is not logged in. See [`login`](https://github.com/dbnlAI/docs/blob/main/reference/python-sdk/python-sdk.md#login).
  * **`DBNLInputValidationError`** – Input does not conform to expected format

#### Examples:

```python
from datetime import datetime, UTC

import dbnl
import pandas as pd

dbnl.login()

proj = dbnl.get_or_create_project(name="test")
test_data = pd.DataFrame(
    {
        "timestamp": [datetime(2025, 1, 1, 11, 39, 53, tzinfo=UTC)],
        "input": ["Hello"],
        "output": ["World"],
    }
)

dbnl.log(
    project_id=proj.id,
    data=test_data,
    data_start_time=datetime(2025, 1, 1, tzinfo=UTC),
    data_end_time=datetime(2025, 1, 2, tzinfo=UTC),
)
```

### `login`

```python
dbnl.login(*,
	api_token: str | None = None,
	api_url: str | None = None,
	app_url: str | None = None,
	verify: bool = True
) → None
```

Setup dbnl SDK to make authenticated requests. After login is run successfully, the dbnl client will be able to issue secure and authenticated requests against hosted endpoints of the dbnl service.

* **Parameters:**
  * **`api_token`** – dbnl API token for authentication; the token can be found on the `/tokens` page of the dbnl app. If `None` is provided, the environment variable `DBNL_API_TOKEN` is used by default.
  * **`api_url`** – The base url of the Distributional API. By default, this is set to `localhost:8080/api` for sandbox users. For other users, please contact your sys admin. If `None` is provided, the environment variable `DBNL_API_URL` is used by default.
  * **`app_url`** – An optional base url of the Distributional app. If this variable is not set, the app url is inferred from the `DBNL_API_URL` variable. For on-prem users, please contact your sys admin if you cannot reach the Distributional UI.

### `update_llm_model`

```python
dbnl.update_llm_model(*,
	llm_model_id: str,
	name: str | None = None,
	description: str | None = None,
	model: str | None = None,
	params: dict[str, Any] | None = None
) → LLMModel
```

Update an [LLM Model](https://docs.dbnl.com/v0.30.x/reference/classes#LLMModel) by id.

* **Parameters:**
  * **`llm_model_id`** – Model id
  * **`name`** – Model name
  * **`description`** – Model description, defaults to `None`
  * **`model`** – Model (e.g. gpt-4, gpt-3.5-turbo, etc.)
  * **`params`** – Model provider parameters (e.g. api key), defaults to `{}`
* **Returns:** Updated LLM Model
