# Status

The Status page shows you all ongoing and previous [DBNL Data Pipeline](https://docs.dbnl.com/v0.29.x/configuration/data-pipeline) runs for your project.

These runs represent the entire [Data Pipeline](https://docs.dbnl.com/v0.29.x/configuration/data-pipeline), including:

* Data ingestion from the specified [Data Connection](https://docs.dbnl.com/v0.29.x/configuration/data-connections) for the [Project](https://docs.dbnl.com/v0.29.x/workflow/projects)
* [Log](https://docs.dbnl.com/v0.29.x/workflow/logs) enrichment by appending [Metrics](https://docs.dbnl.com/v0.29.x/workflow/metrics) using the [Model Connection](https://docs.dbnl.com/v0.29.x/configuration/model-connections)
* Analysis and publishing of [Insights](https://docs.dbnl.com/v0.29.x/workflow/insights)

You can view the current status of each run grouped by its data date range, i.e., the time window for which DBNL was ingesting data. If a Data Pipeline run has errored, hover over the error status to view the exception, then restart the run by clicking the restart button in the actions column.

## Expected Pipeline Duration

Typical pipeline run times depend on log volume and Model Connection latency:

| Log Volume          | Expected Duration | Notes                        |
| ------------------- | ----------------- | ---------------------------- |
| < 1,000 logs        | 3-7 minutes       | Fast for testing/POC         |
| 1,000-10,000 logs   | 10-30 minutes     | Typical small projects       |
| 10,000-100,000 logs | 30-90 minutes     | Standard production workload |
| > 100,000 logs      | 1-3 hours         | Large-scale deployments      |

**Pipeline stages and their typical durations:**

1. **Ingest** (10-30 seconds): Upload and validate data
2. **Enrich** (60-80% of total time): Compute metrics using Model Connection
3. **Analyze** (10-20% of total time): Run unsupervised learning algorithms
4. **Publish** (30-60 seconds): Update dashboards and generate insights
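The stage breakdown above can be turned into a rough back-of-envelope estimate. The sketch below is illustrative only: the per-log enrichment latency and the fixed stage overheads are assumed values chosen to land inside the ranges in the table, not DBNL constants.

```python
# Hypothetical estimator for total pipeline duration, derived from the
# stage breakdown above. All constants are illustrative assumptions.

def estimate_pipeline_minutes(num_logs: int,
                              enrich_seconds_per_log: float = 0.05) -> float:
    ingest = 30 / 60    # Ingest: ~10-30 s (upper bound, in minutes)
    publish = 60 / 60   # Publish: ~30-60 s (upper bound, in minutes)
    enrich = num_logs * enrich_seconds_per_log / 60
    # Enrich is ~60-80% of total time and Analyze ~10-20%, so Analyze is
    # approximated here as a quarter of Enrich.
    analyze = enrich * 0.25
    return ingest + enrich + analyze + publish
```

For 10,000 logs at the assumed 0.05 s per enriched log, this lands at roughly 12 minutes, consistent with the 10-30 minute range in the table; a slower external Model Connection raises `enrich_seconds_per_log` and dominates the total.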

{% hint style="info" %}
**Enrich is the slowest stage** because it calls your Model Connection for each log. Faster Model Connections (e.g., local NVIDIA NIMs) significantly reduce total pipeline time compared to external APIs.
{% endhint %}

{% hint style="info" %}
The DBNL Data Pipeline contains many different tasks and can be complex to debug. Please reach out to us at <support@distributional.com> or [distributional.com/contact](https://distributional.com/contact) and we would be happy to help.
{% endhint %}

<figure><img src="https://content.gitbook.com/content/lUoirJaFEHofsQHmOtdL/blobs/9dDO9LAobg1FP6Oxt7Tg/image.png" alt=""><figcaption></figcaption></figure>
