# Functions

<details>

<summary>abs</summary>

```
abs(expr)
```

Returns the absolute value of the input.

</details>

<details>

<summary>add</summary>

```
add(expr1, expr2)
```

Adds the two inputs.

</details>

<details>

<summary>and</summary>

```
and(expr1, expr2)
```

Computes the logical AND of two boolean columns.

</details>

<details>

<summary>character_count</summary>

```
character_count(text)
```

Returns the number of characters in a text column.

* Aliases
  * `num_chars`

</details>

<details>

<summary>coalesce</summary>

```
coalesce(expr)
```

Returns the first expression that evaluates to a non-null value.
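
The row-wise behavior can be sketched in Python (an illustration of the semantics only, not the engine's implementation; nulls are modeled as `None`):

```python
def coalesce(*values):
    """Return the first non-null (non-None) argument, or None if all are null."""
    for v in values:
        if v is not None:
            return v
    return None
```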

</details>

<details>

<summary>concat</summary>

```
concat(expr)
```

Concatenates multiple text columns into one.

</details>

<details>

<summary>contains</summary>

```
contains(text, text)
```

Returns true if the input string contains the substring.

</details>

<details>

<summary>count</summary>

```
count(expr)
```

Computes the number of rows in a column.

</details>

<details>

<summary>count_distinct</summary>

```
count_distinct(expr)
```

Computes the number of distinct non-null values in a column.
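
As a Python sketch of the semantics (nulls modeled as `None`; the engine itself is not Python):

```python
def count_distinct(column):
    """Count distinct non-null values: nulls are dropped before deduplication."""
    return len({v for v in column if v is not None})
```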

</details>

<details>

<summary>count_if</summary>

```
count_if(expr)
```

Computes the number of rows in a column that satisfy a condition.

</details>

<details>

<summary>date_trunc</summary>

```
date_trunc(expr1, expr2)
```

Truncates a timestamp to the specified unit.
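
A Python sketch of the row-wise behavior, assuming the first argument is the unit name (the actual argument order and the set of supported units are engine-defined):

```python
from datetime import datetime

def date_trunc(unit, ts):
    """Truncate a timestamp down to the start of the given unit."""
    if unit == "year":
        return ts.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)
    if unit == "month":
        return ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if unit == "day":
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    if unit == "hour":
        return ts.replace(minute=0, second=0, microsecond=0)
    if unit == "minute":
        return ts.replace(second=0, microsecond=0)
    raise ValueError(f"unsupported unit: {unit}")
```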

</details>

<details>

<summary>deterministic_sample</summary>

```
deterministic_sample(expr)
```

Returns a deterministic sample value in \[0, 1) based on the input value.
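
The idea — hash the input to a stable number in \[0, 1) — can be sketched as follows. The engine's actual hash function is unspecified, so MD5 here is purely an illustrative stand-in:

```python
import hashlib

def deterministic_sample(value):
    """Map a value to a stable float in [0, 1): equal inputs always map
    to the same output, so sampling decisions are reproducible."""
    digest = hashlib.md5(str(value).encode("utf-8")).digest()
    # Interpret the first 8 bytes as an integer and scale into [0, 1).
    return int.from_bytes(digest[:8], "big") / 2**64
```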

</details>

<details>

<summary>divide</summary>

```
divide(expr1, expr2)
```

Divides the first input by the second.

</details>

<details>

<summary>embed</summary>

```
embed(text)
```

Returns the embedding of a text column. Embedding model: all-mpnet-base-v2.

</details>

<details>

<summary>equal_to</summary>

```
equal_to(expr1, expr2)
```

Computes the element-wise equal to comparison of two columns. input1 == input2

* Aliases
  * `eq`

</details>

<details>

<summary>filter</summary>

```
filter(expr1, expr2)
```

Filters a column using another column as a mask.

</details>

<details>

<summary>greater_than</summary>

```
greater_than(expr1, expr2)
```

Computes the element-wise greater than comparison of two columns. input1 > input2

* Aliases
  * `gt`

</details>

<details>

<summary>greater_than_or_equal_to</summary>

```
greater_than_or_equal_to(expr1, expr2)
```

Computes the element-wise greater than or equal to comparison of two columns. input1 >= input2

* Aliases
  * `gte`

</details>

<details>

<summary>icontains</summary>

```
icontains(text, text)
```

Returns true if the input string contains the substring, ignoring case.

</details>

<details>

<summary>is_valid_json</summary>

```
is_valid_json(text)
```

Returns true if the input string is valid JSON.

</details>

<details>

<summary>less_than</summary>

```
less_than(expr1, expr2)
```

Computes the element-wise less than comparison of two columns. input1 < input2

* Aliases
  * `lt`

</details>

<details>

<summary>less_than_or_equal_to</summary>

```
less_than_or_equal_to(expr1, expr2)
```

Computes the element-wise less than or equal to comparison of two columns. input1 <= input2

* Aliases
  * `lte`

</details>

<details>

<summary>levenshtein</summary>

```
levenshtein(output, reference)
```

Returns the Damerau-Levenshtein distance between two strings.
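
Note that this is Damerau-Levenshtein rather than plain Levenshtein distance, so adjacent transpositions also count as a single edit. A sketch of the common optimal-string-alignment variant (the engine may use a different variant):

```python
def damerau_levenshtein(a, b):
    """Minimum number of insertions, deletions, substitutions, and
    adjacent transpositions needed to turn string a into string b."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```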

</details>

<details>

<summary>list_contains</summary>

```
list_contains(list, value)
```

Returns true if the list contains the value.

</details>

<details>

<summary>list_extract</summary>

```
list_extract(list_expr, index_expr)
```

Extracts the item at the given index from a list.

</details>

<details>

<summary>list_has_duplicate</summary>

```
list_has_duplicate(expr)
```

Returns true if the list contains duplicate items.

</details>

<details>

<summary>list_length</summary>

```
list_length(expr)
```

Returns the length of each list in a list column.

</details>

<details>

<summary>list_most_common</summary>

```
list_most_common(expr)
```

Returns the most common item in a list.

</details>

<details>

<summary>list_starts_with</summary>

```
list_starts_with(list, prefix)
```

Returns true if the list starts with the given prefix.

</details>

<details>

<summary>list_zip</summary>

```
list_zip(expr)
```

Zips multiple lists into a list of structs.
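
A Python sketch of the row-wise behavior, with structs modeled as dicts. The positional field names here are an assumption; the engine's actual field naming may differ:

```python
def list_zip(*lists):
    """Zip parallel lists element-wise into a list of structs (dicts here).
    Stops at the shortest list, as Python's zip does."""
    return [
        {f"field_{i}": v for i, v in enumerate(row)}
        for row in zip(*lists)
    ]
```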

</details>

<details>

<summary>llm_answer_groundedness</summary>

```
llm_answer_groundedness(model_name, prompt_version, answer, context)
```

Classifies whether the generated answer is grounded in and supported by the provided context.

</details>

<details>

<summary>llm_answer_groundedness_with_justification</summary>

```
llm_answer_groundedness_with_justification(model_name, prompt_version, answer, context)
```

Classifies whether the generated answer is grounded in and supported by the provided context.

</details>

<details>

<summary>llm_answer_refusal</summary>

```
llm_answer_refusal(model_name, prompt_version, answer)
```

Classifies whether the model refused to answer the user's question.

</details>

<details>

<summary>llm_answer_refusal_with_justification</summary>

```
llm_answer_refusal_with_justification(model_name, prompt_version, answer)
```

Classifies whether the model refused to answer the user's question.

</details>

<details>

<summary>llm_answer_relevancy</summary>

```
llm_answer_relevancy(model_name, prompt_version, question, answer)
```

Classifies whether the generated answer is relevant and responsive to the user's question.

* Aliases
  * `rag_answer_relevancy`

</details>

<details>

<summary>llm_answer_relevancy_with_justification</summary>

```
llm_answer_relevancy_with_justification(model_name, prompt_version, question, answer)
```

Classifies whether the generated answer is relevant and responsive to the user's question.

</details>

<details>

<summary>llm_classify</summary>

```
llm_classify(model_name, prompt, classes)
```

Classifies text into custom categories you define, using your own prompt and labels.

</details>

<details>

<summary>llm_classify_with_justification</summary>

```
llm_classify_with_justification(model_name, prompt, classes)
```

Classifies text into custom categories you define, using your own prompt and labels.

</details>

<details>

<summary>llm_context_relevancy</summary>

```
llm_context_relevancy(model_name, prompt_version, question, context)
```

Classifies whether the retrieved context is relevant to the user's question.

</details>

<details>

<summary>llm_context_relevancy_with_justification</summary>

```
llm_context_relevancy_with_justification(model_name, prompt_version, question, context)
```

Classifies whether the retrieved context is relevant to the user's question.

</details>

<details>

<summary>llm_conversation_summary</summary>

```
llm_conversation_summary(model_name, prompt_version, conversation)
```

Generates a concise summary of a full conversation session between an AI assistant and a user.

</details>

<details>

<summary>llm_question_clarity</summary>

```
llm_question_clarity(model_name, prompt_version, question)
```

Scores how clear and well-formed a question is, from 1 (ambiguous or incoherent) to 5 (perfectly clear).

</details>

<details>

<summary>llm_question_clarity_with_justification</summary>

```
llm_question_clarity_with_justification(model_name, prompt_version, question)
```

Scores how clear and well-formed a question is, from 1 (ambiguous or incoherent) to 5 (perfectly clear).

</details>

<details>

<summary>llm_score</summary>

```
llm_score(model_name, prompt)
```

Scores text on a 1–5 scale using your own custom evaluation prompt.

</details>

<details>

<summary>llm_score_with_justification</summary>

```
llm_score_with_justification(model_name, prompt)
```

Scores text on a 1–5 scale using your own custom evaluation prompt.

</details>

<details>

<summary>llm_summarization</summary>

```
llm_summarization(model_name, prompt_version, input, output)
```

Generates a concise summary of a single conversational exchange (input and output).

</details>

<details>

<summary>llm_text_frustration</summary>

```
llm_text_frustration(model_name, prompt_version, text)
```

Scores the level of user frustration expressed in a text, from 1 (not frustrated) to 5 (extremely frustrated).

</details>

<details>

<summary>llm_text_frustration_with_justification</summary>

```
llm_text_frustration_with_justification(model_name, prompt_version, text)
```

Scores the level of user frustration expressed in a text, from 1 (not frustrated) to 5 (extremely frustrated).

</details>

<details>

<summary>llm_text_sentiment</summary>

```
llm_text_sentiment(model_name, prompt_version, text)
```

Classifies the overall sentiment of a text as positive, negative, or neutral.

* Aliases
  * `text_sentiment`

</details>

<details>

<summary>llm_text_sentiment_with_justification</summary>

```
llm_text_sentiment_with_justification(model_name, prompt_version, text)
```

Classifies the overall sentiment of a text as positive, negative, or neutral.

</details>

<details>

<summary>llm_text_similarity</summary>

```
llm_text_similarity(model_name, prompt_version, output, reference)
```

Scores how semantically similar an output is to a target reference, from 1 (completely different) to 5 (equivalent).

* Aliases
  * `text_similarity`

</details>

<details>

<summary>llm_text_similarity_with_justification</summary>

```
llm_text_similarity_with_justification(model_name, prompt_version, output, reference)
```

Scores how semantically similar an output is to a target reference, from 1 (completely different) to 5 (equivalent).

</details>

<details>

<summary>llm_text_toxicity</summary>

```
llm_text_toxicity(model_name, prompt_version, text)
```

Scores how toxic or harmful a piece of text is, from 1 (not toxic) to 5 (highly toxic).

</details>

<details>

<summary>llm_text_toxicity_with_justification</summary>

```
llm_text_toxicity_with_justification(model_name, prompt_version, text)
```

Scores how toxic or harmful a piece of text is, from 1 (not toxic) to 5 (highly toxic).

</details>

<details>

<summary>llm_user_frustration</summary>

```
llm_user_frustration(model_name, prompt_version, conversation)
```

Scores the overall user frustration across a conversation session, from 1 (satisfied) to 5 (extremely frustrated).

</details>

<details>

<summary>llm_user_frustration_with_justification</summary>

```
llm_user_frustration_with_justification(model_name, prompt_version, conversation)
```

Scores the overall user frustration across a conversation session, from 1 (satisfied) to 5 (extremely frustrated).

</details>

<details>

<summary>map_extract</summary>

```
map_extract(map_expr, key_expr)
```

Extracts the value for a given key from a map, returning null if the key is not in the map.

</details>

<details>

<summary>max</summary>

```
max(expr)
```

Computes the max of a column.

</details>

<details>

<summary>mean</summary>

```
mean(expr)
```

Computes the mean of a column.

</details>

<details>

<summary>median</summary>

```
median(expr)
```

Computes the median of a column.

</details>

<details>

<summary>min</summary>

```
min(expr)
```

Computes the min of a column.

</details>

<details>

<summary>mode</summary>

```
mode(expr)
```

Computes the mode of a column.

</details>

<details>

<summary>multiply</summary>

```
multiply(expr1, expr2)
```

Multiplies the two inputs.

</details>

<details>

<summary>negate</summary>

```
negate(expr)
```

Returns the negation of the input.

</details>

<details>

<summary>not</summary>

```
not(expr)
```

Computes the logical NOT of a boolean column.

</details>

<details>

<summary>not_equal_to</summary>

```
not_equal_to(expr1, expr2)
```

Computes the element-wise not equal to comparison of two columns. input1 != input2

* Aliases
  * `neq`

</details>

<details>

<summary>or</summary>

```
or(expr1, expr2)
```

Computes the logical OR of two boolean columns.

</details>

<details>

<summary>percentile</summary>

```
percentile(expr1, expr2)
```

Computes the nth percentile of a column.
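
One common convention for the nth percentile — linear interpolation between closest ranks — sketched in Python; the engine's interpolation rule may differ:

```python
def percentile(values, p):
    """Nth percentile (p in [0, 100]) of a numeric column, using linear
    interpolation between the two closest ranks."""
    xs = sorted(values)
    if not xs:
        return None
    rank = (p / 100) * (len(xs) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(xs) - 1)
    frac = rank - lo
    return xs[lo] + (xs[hi] - xs[lo]) * frac
```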

</details>

<details>

<summary>rouge1</summary>

```
rouge1(output, reference)
```

Returns the ROUGE-1 score between two text columns.
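
ROUGE-1 measures unigram overlap between an output and a reference, typically reported as an F1 score. A sketch with naive whitespace tokenization (real implementations normalize and tokenize more carefully):

```python
from collections import Counter

def rouge1_f1(output, reference):
    """Unigram-overlap F1 between a candidate string and a reference string."""
    out_counts = Counter(output.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum((out_counts & ref_counts).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(out_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```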

</details>

<details>

<summary>rouge2</summary>

```
rouge2(output, reference)
```

Returns the ROUGE-2 score between two text columns.

</details>

<details>

<summary>rougeL</summary>

```
rougeL(output, reference)
```

Returns the ROUGE-L score between two text columns.

</details>

<details>

<summary>rougeLsum</summary>

```
rougeLsum(output, reference)
```

Returns the ROUGE-Lsum score between two text columns.

</details>

<details>

<summary>stddev</summary>

```
stddev(expr)
```

Computes the sample standard deviation of a column.
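
The "sample" qualifier means the n − 1 (Bessel-corrected) denominator rather than the population formula:

```python
import math

def sample_stddev(values):
    """Sample standard deviation: variance uses an (n - 1) denominator.
    Undefined (None here) for fewer than two values."""
    n = len(values)
    if n < 2:
        return None
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / (n - 1)
    return math.sqrt(variance)
```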

</details>

<details>

<summary>struct_extract</summary>

```
struct_extract(struct_expr, field_name)
```

Extracts a field from a struct expression.

</details>

<details>

<summary>subtract</summary>

```
subtract(expr1, expr2)
```

Subtracts the second input from the first.

</details>

<details>

<summary>sum</summary>

```
sum(expr)
```

Computes the sum of a column.

</details>
