Metrics
ffTRF scores predictions column-wise. In other words, each metric function
expects observed and predicted arrays with shape (n_samples, n_outputs) and
returns one score per output column before any optional averaging.
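The column-wise convention can be illustrated with a toy sketch (assumed shapes only; this is not fftrf code): observed and predicted arrays are `(n_samples, n_outputs)`, and a metric reduces over the sample axis to one score per output column.

```python
import numpy as np

# Toy illustration of the column-wise scoring convention.
n_samples, n_outputs = 100, 4
rng = np.random.default_rng(0)
y_true = rng.standard_normal((n_samples, n_outputs))
y_pred = y_true + 0.1 * rng.standard_normal((n_samples, n_outputs))

# Any column-wise metric reduces over axis 0 (samples); here a
# negative mean squared error stands in as an example.
scores = -((y_true - y_pred) ** 2).mean(axis=0)
print(scores.shape)  # (4,)
```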
These metrics are used for:

- `predict(..., response=...)` and `score(...)`
- choosing the best regularization value during cross-validation

They are not alternative fitting objectives. The TRF itself is always fitted with the same ridge-regularized spectral solver.
Built-In Metrics
- `pearsonr`: default correlation-based scorer
- `r2_score`: coefficient of determination
- `explained_variance_score`: variance-based goodness of fit
- `neg_mse`: mTRF-compatible negative MSE where larger values are better
- `available_metrics()`: list built-in metric names accepted by `TRF(metric=...)`
Custom Metrics
You can also pass your own callable to `TRF(metric=...)`. A custom metric must:

- accept `(y_true, y_pred)`
- return one score per output column
- use "larger is better" semantics if you want cross-validation to pick the best value sensibly
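The requirements above can be satisfied by a few lines of NumPy. The sketch below defines a hypothetical negative mean-absolute-error metric; the name `neg_mae` is our own illustration and is not part of fftrf.

```python
import numpy as np

# Hypothetical custom metric: negative mean absolute error, computed
# column-wise, with "larger is better" semantics (less error -> larger
# score), as the contract above requires.
def neg_mae(y_true, y_pred):
    return -np.abs(y_true - y_pred).mean(axis=0)

y_true = np.array([[0.0, 1.0], [2.0, 3.0]])
y_pred = np.array([[0.5, 1.0], [1.5, 3.0]])
print(neg_mae(y_true, y_pred))  # one score per column: [-0.5, -0.0]
```

A callable like this could then be handed to `TRF(metric=neg_mae)` in place of a built-in metric name.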
For compatibility with mTRF, `fftrf.neg_mse` follows the same "negative MSE" convention: larger values are still better during cross-validation, even though the underlying quantity is the mean squared error.
fftrf.available_metrics()
Return the names of built-in scoring metrics.
Returns:

| Type | Description |
|---|---|
| `tuple of str` | Sorted metric names that can be passed to `TRF(metric=...)`. |
Notes
Some metrics intentionally expose both a short alias and a more explicit
function-style name, for example "r2" and "r2_score". They resolve
to the same scoring function.
fftrf.pearsonr(y_true, y_pred)
Compute column-wise Pearson correlation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `y_true` | `ndarray` | Observed samples arranged as `(n_samples, n_outputs)`. | required |
| `y_pred` | `ndarray` | Predicted samples with the same shape as `y_true`. | required |
Returns:

| Type | Description |
|---|---|
| `ndarray` | One correlation coefficient per output channel / feature. |
Notes

This is the default scoring metric used by `TRF`. It is intentionally lightweight and does not return p-values.
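The column-wise correlation can be reproduced in plain NumPy (an illustrative reimplementation, not the fftrf source): center each column, then divide the cross-product by the product of the per-column norms.

```python
import numpy as np

# Column-wise Pearson correlation, as documented above.
def pearsonr_columns(y_true, y_pred):
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    return (yt * yp).sum(axis=0) / np.sqrt(
        (yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0)
    )

y_true = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
# Column 0 is a scaled copy (r = 1), column 1 is inverted (r = -1).
y_pred = np.array([[2.0, 3.0], [4.0, 2.0], [6.0, 1.0]])
print(pearsonr_columns(y_true, y_pred))  # [ 1. -1.]
```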
fftrf.r2_score(y_true, y_pred)
Compute column-wise coefficient of determination.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `y_true` | `ndarray` | Observed samples arranged as `(n_samples, n_outputs)`. | required |
| `y_pred` | `ndarray` | Predicted samples with the same shape as `y_true`. | required |
Returns:

| Type | Description |
|---|---|
| `ndarray` | One R² score per output column. |
Notes
Scores can become negative when predictions are worse than a constant mean predictor. This makes the metric suitable for model comparison and cross-validation because larger values remain better.
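The negative-score behaviour is easy to verify with the textbook definition of R² (an illustration, not the fftrf source): a constant mean predictor scores exactly zero, and anything worse scores below it.

```python
import numpy as np

# Textbook column-wise R^2: 1 - SS_res / SS_tot.
def r2_columns(y_true, y_pred):
    ss_res = ((y_true - y_pred) ** 2).sum(axis=0)
    ss_tot = ((y_true - y_true.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot

y_true = np.array([[1.0], [2.0], [3.0]])
mean_pred = np.full_like(y_true, 2.0)       # constant mean predictor
bad_pred = np.array([[3.0], [2.0], [1.0]])  # anti-correlated prediction

print(r2_columns(y_true, mean_pred))  # [0.]  -> baseline
print(r2_columns(y_true, bad_pred))   # [-3.] -> worse than the mean
```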
fftrf.explained_variance_score(y_true, y_pred)
Compute column-wise explained variance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `y_true` | `ndarray` | Observed samples arranged as `(n_samples, n_outputs)`. | required |
| `y_pred` | `ndarray` | Predicted samples with the same shape as `y_true`. | required |
Returns:

| Type | Description |
|---|---|
| `ndarray` | One explained-variance score per output column. |
Notes

Explained variance focuses on residual variance rather than absolute error magnitude. Like `r2_score`, larger values are better.
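The difference from R² shows up under a constant prediction bias. The sketch below (standard definitions, not the fftrf source) gives a prediction with a perfect shape but a large offset: explained variance ignores the offset, while R² penalizes it heavily.

```python
import numpy as np

# Explained variance looks only at the variance of the residual.
def explained_variance_columns(y_true, y_pred):
    resid = y_true - y_pred
    return 1.0 - resid.var(axis=0) / y_true.var(axis=0)

# Textbook R^2 for comparison.
def r2_columns(y_true, y_pred):
    ss_res = ((y_true - y_pred) ** 2).sum(axis=0)
    ss_tot = ((y_true - y_true.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot

y_true = np.array([[1.0], [2.0], [3.0]])
y_pred = y_true + 10.0  # perfect shape, large constant bias

print(explained_variance_columns(y_true, y_pred))  # [1.]
print(r2_columns(y_true, y_pred))                  # [-149.]
```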
fftrf.neg_mse(y_true, y_pred)
Compute column-wise negative mean squared error.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `y_true` | `ndarray` | Observed samples arranged as `(n_samples, n_outputs)`. | required |
| `y_pred` | `ndarray` | Predicted samples with the same shape as `y_true`. | required |
Returns:

| Type | Description |
|---|---|
| `ndarray` | One negative-MSE score per output column. |
Notes

The sign convention matches `mtrf.stats.neg_mse`: larger values are better because the underlying MSE is multiplied by -1. Despite the name, this function is therefore directly suitable for cross-validation model selection in `TRF`.
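The sign flip is what makes argmax-style model selection work. A minimal sketch of the documented convention (not the actual fftrf source):

```python
import numpy as np

# neg_mse is -1 * MSE per column, so np.argmax picks the candidate
# with the smallest error.
def neg_mse(y_true, y_pred):
    return -((y_true - y_pred) ** 2).mean(axis=0)

y_true = np.array([[1.0], [2.0], [3.0]])
good = np.array([[1.1], [2.0], [2.9]])  # close fit
bad = np.array([[0.0], [0.0], [0.0]])   # poor fit

scores = [neg_mse(y_true, p)[0] for p in (good, bad)]
best = int(np.argmax(scores))
print(best)  # 0 -> the close fit wins: larger (less negative) is better
```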