# Functions

<details>

<summary>abs</summary>

```
abs(expr)
```

Returns the absolute value of the input.

</details>

<details>

<summary>add</summary>

```
add(expr1, expr2)
```

Adds the two inputs.

</details>

<details>

<summary>and</summary>

```
and(expr1, expr2)
```

Logical AND of two boolean columns.

</details>

<details>

<summary>automated_readability_index</summary>

```
automated_readability_index(text)
```

Returns the ARI (Automated Readability Index), a number that approximates the U.S. grade level needed to comprehend the text. For example, an ARI of 6.5 indicates a 6th- to 7th-grade reading level.

</details>
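
The ARI is a simple formula over character, word, and sentence counts. A minimal sketch, assuming whitespace word splitting and sentence breaks on `.`, `!`, `?` (the library's tokenization may differ):

```python
import re

def ari(text: str) -> float:
    """Illustrative ARI: 4.71*(chars/words) + 0.5*(words/sentences) - 21.43.
    Characters count letters and digits only."""
    words = re.findall(r"\S+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    chars = sum(1 for c in text if c.isalnum())
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43
```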

<details>

<summary>bleu</summary>

```
bleu(output, reference)
```

Computes the BLEU score between two columns.

</details>
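
BLEU combines clipped n-gram precision with a brevity penalty. A minimal unsmoothed sentence-level sketch with whitespace tokens (the actual implementation likely uses smoothing and a proper tokenizer):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(output: str, reference: str, max_n: int = 4) -> float:
    """Illustrative sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    out, ref = output.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        out_ng, ref_ng = ngrams(out, n), ngrams(ref, n)
        overlap = sum((out_ng & ref_ng).values())  # clipped matches
        if overlap == 0:
            return 0.0  # no smoothing: any zero precision zeroes the score
        log_prec += math.log(overlap / sum(out_ng.values()))
    bp = 1.0 if len(out) >= len(ref) else math.exp(1 - len(ref) / len(out))
    return bp * math.exp(log_prec / max_n)
```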

<details>

<summary>character_count</summary>

```
character_count(text)
```

Returns the number of characters in a text column.

* Aliases
  * `num_chars`

</details>

<details>

<summary>coalesce</summary>

```
coalesce(expr1, expr2, ...)
```

Returns the first expression that evaluates to a non-null value.

</details>
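
Row-wise, coalesce behaves like the following sketch (the function name and null representation are illustrative):

```python
def coalesce(*values):
    """Return the first non-null argument, or None if all are null."""
    return next((v for v in values if v is not None), None)
```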

<details>

<summary>concat</summary>

```
concat(expr1, expr2, ...)
```

Concatenates multiple text columns into one.

</details>

<details>

<summary>contains</summary>

```
contains(text, substring)
```

Returns true if the input string contains the given substring.

</details>

<details>

<summary>divide</summary>

```
divide(expr1, expr2)
```

Divides the first input by the second.

</details>

<details>

<summary>embed</summary>

```
embed(text)
```

Returns the embedding of a text column. Embedding model: all-mpnet-base-v2.

</details>

<details>

<summary>equal_to</summary>

```
equal_to(expr1, expr2)
```

Computes the element-wise equality comparison of two columns. input1 == input2

* Aliases
  * `eq`

</details>

<details>

<summary>filter</summary>

```
filter(expr1, expr2)
```

Filters a column using another column as a boolean mask.

</details>
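
Row-wise, the semantics match this sketch (`filter_column` is an illustrative name):

```python
def filter_column(values, mask):
    """Keep the values at positions where the boolean mask is true."""
    return [v for v, keep in zip(values, mask) if keep]
```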

<details>

<summary>flesch_kincaid_grade</summary>

```
flesch_kincaid_grade(text)
```

Returns the Flesch-Kincaid Grade of the given text. The score maps to a U.S. grade level: a score of 9.3 means a ninth grader should be able to read the document.

</details>
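
The grade is a linear formula over sentence length and syllables per word. A minimal sketch using a crude vowel-group syllable heuristic (real implementations use dictionary-based syllable counts):

```python
import re

def _syllables(word: str) -> int:
    """Crude syllable estimate: number of vowel groups, minimum 1."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Illustrative Flesch-Kincaid Grade:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    syllables = sum(_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59
```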

<details>

<summary>greater_than</summary>

```
greater_than(expr1, expr2)
```

Computes the element-wise greater than comparison of two columns. input1 > input2

* Aliases
  * `gt`

</details>

<details>

<summary>greater_than_or_equal_to</summary>

```
greater_than_or_equal_to(expr1, expr2)
```

Computes the element-wise greater than or equal to comparison of two columns. input1 >= input2

* Aliases
  * `gte`

</details>

<details>

<summary>is_valid_json</summary>

```
is_valid_json(text)
```

Returns true if the input string is valid JSON.

</details>

<details>

<summary>less_than</summary>

```
less_than(expr1, expr2)
```

Computes the element-wise less than comparison of two columns. input1 < input2

* Aliases
  * `lt`

</details>

<details>

<summary>less_than_or_equal_to</summary>

```
less_than_or_equal_to(expr1, expr2)
```

Computes the element-wise less than or equal to comparison of two columns. input1 <= input2

* Aliases
  * `lte`

</details>

<details>

<summary>levenshtein</summary>

```
levenshtein(output, reference)
```

Returns the Damerau-Levenshtein distance between two strings: the minimum number of insertions, deletions, substitutions, and adjacent transpositions needed to turn one into the other.

</details>
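
A standard dynamic-programming sketch of the optimal-string-alignment variant, where a transposition of adjacent characters counts as one edit (the library's exact variant may differ):

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance with insertions, deletions, substitutions,
    and adjacent transpositions (optimal string alignment)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```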

<details>

<summary>list_contains</summary>

```
list_contains(list, value)
```

Returns true if the list contains the value.

</details>

<details>

<summary>list_extract</summary>

```
list_extract(list_expr, index_expr)
```

Extracts the item at the given index from a list.

</details>

<details>

<summary>list_has_duplicate</summary>

```
list_has_duplicate(expr)
```

Returns true if the list contains duplicate items.

</details>

<details>

<summary>list_length</summary>

```
list_length(expr)
```

Returns the length of each list in a list column.

</details>

<details>

<summary>list_most_common</summary>

```
list_most_common(expr)
```

Returns the most common item in a list.

</details>

<details>

<summary>list_starts_with</summary>

```
list_starts_with(list, prefix)
```

Returns true if the list starts with the given prefix.

</details>

<details>

<summary>list_zip</summary>

```
list_zip(expr1, expr2, ...)
```

Zips multiple lists into a list of structs.

</details>
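
Conceptually, per row, the zip works as in this sketch. The field names `field_0`, `field_1`, ... are illustrative, and this version truncates to the shortest list; the library's naming and padding behavior may differ:

```python
def list_zip(*lists):
    """Zip several lists into a list of dicts ("structs")."""
    return [{f"field_{i}": v for i, v in enumerate(row)} for row in zip(*lists)]
```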

<details>

<summary>llm_answer_groundedness</summary>

```
llm_answer_groundedness(model_name, prompt_version, answer, context)
```

Judges whether the answer is grounded in the provided context.

</details>

<details>

<summary>llm_answer_refusal</summary>

```
llm_answer_refusal(model_name, prompt_version, answer)
```

Judges whether the answer is a refusal to answer the question.

</details>

<details>

<summary>llm_answer_relevancy</summary>

```
llm_answer_relevancy(model_name, prompt_version, question, answer)
```

Judges whether the answer is relevant to the question.

* Aliases
  * `rag_answer_relevancy`

</details>

<details>

<summary>llm_classify</summary>

```
llm_classify(model_name, prompt, classes)
```

Classifies text into custom categories using an LLM.

</details>

<details>

<summary>llm_context_relevancy</summary>

```
llm_context_relevancy(model_name, prompt_version, question, context)
```

Uses an LLM to judge whether the contexts are relevant to the question.

</details>

<details>

<summary>llm_question_clarity</summary>

```
llm_question_clarity(model_name, prompt_version, question)
```

Judges whether the question is clearly phrased.

</details>

<details>

<summary>llm_score</summary>

```
llm_score(model_name, prompt)
```

Scores text using an LLM.

</details>

<details>

<summary>llm_summarization</summary>

```
llm_summarization(model_name, prompt_version, input, output)
```

Summarizes the input and output of a conversational system.

</details>

<details>

<summary>llm_text_frustration</summary>

```
llm_text_frustration(model_name, prompt_version, text)
```

Judges the frustration of a text (defaults to the input) on a scale of 1 to 5.

</details>

<details>

<summary>llm_text_sentiment</summary>

```
llm_text_sentiment(model_name, prompt_version, text)
```

Judges the sentiment of a text as positive, negative, or neutral.

* Aliases
  * `text_sentiment`

</details>

<details>

<summary>llm_text_similarity</summary>

```
llm_text_similarity(model_name, prompt_version, output, reference)
```

Judges the similarity of an output to a reference on a scale of 1 to 5.

* Aliases
  * `text_similarity`

</details>

<details>

<summary>llm_text_toxicity</summary>

```
llm_text_toxicity(model_name, prompt_version, text)
```

Judges the toxicity of a text on a scale of 1 to 5.

</details>

<details>

<summary>map_extract</summary>

```
map_extract(map_expr, key_expr)
```

Extracts the value for a given key from a map, returning null if the key is not in the map.

</details>

<details>

<summary>multiply</summary>

```
multiply(expr1, expr2)
```

Multiplies the two inputs.

</details>

<details>

<summary>negate</summary>

```
negate(expr)
```

Returns the negation of the input.

</details>

<details>

<summary>not</summary>

```
not(expr)
```

Logical NOT of a boolean column.

</details>

<details>

<summary>not_equal_to</summary>

```
not_equal_to(expr1, expr2)
```

Computes the element-wise not-equal comparison of two columns. input1 != input2

* Aliases
  * `neq`

</details>

<details>

<summary>or</summary>

```
or(expr1, expr2)
```

Logical OR of two boolean columns.

</details>

<details>

<summary>rouge1</summary>

```
rouge1(output, reference)
```

Returns the ROUGE-1 score between two columns.

</details>
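
ROUGE-1 measures unigram overlap between the output and the reference; ROUGE-2 uses bigrams, and ROUGE-L the longest common subsequence. A minimal ROUGE-1 F-measure sketch with whitespace tokens and no stemming (the library likely reports the same F-measure but with proper tokenization):

```python
from collections import Counter

def rouge1_f1(output: str, reference: str) -> float:
    """Illustrative ROUGE-1 F-measure: harmonic mean of unigram
    precision and recall, with counts clipped to the reference."""
    out, ref = Counter(output.split()), Counter(reference.split())
    overlap = sum((out & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(out.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```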

<details>

<summary>rouge2</summary>

```
rouge2(output, reference)
```

Returns the ROUGE-2 score between two columns.

</details>

<details>

<summary>rougeL</summary>

```
rougeL(output, reference)
```

Returns the ROUGE-L score between two columns.

</details>

<details>

<summary>rougeLsum</summary>

```
rougeLsum(output, reference)
```

Returns the ROUGE-Lsum score between two columns.

</details>

<details>

<summary>sentence_count</summary>

```
sentence_count(text)
```

Returns the number of sentences in a text column.

* Aliases
  * `num_sentences`

</details>

<details>

<summary>struct_extract</summary>

```
struct_extract(struct_expr, field_name)
```

Extracts a field from a struct expression.

</details>

<details>

<summary>subtract</summary>

```
subtract(expr1, expr2)
```

Subtracts the second input from the first.

</details>

<details>

<summary>token_count</summary>

```
token_count(text)
```

Returns the number of tokens in a text column.

</details>

<details>

<summary>word_count</summary>

```
word_count(text)
```

Returns the number of words in a text column.

* Aliases
  * `num_words`

</details>
