Two‑Level Uncertainty for Safe AI Ranking Models
As introduced in the Introduction to Two-Level Uncertainty section, AI ranking models face a critical challenge: non-stationarity. Market conditions, financial regimes, and data distributions shift over time, causing historically reliable models to fail suddenly. For example, the AI Stock Forecaster, a LightGBM ranker trained on U.S. equities, saw its 20-day RankIC fall from 0.072 to 0.010 during the 2024 AI thematic rally. This sharp decline in predictive power shows how regime shifts can invalidate signals even from high-performing models.

Traditional approaches treat ranking models as static tools, deploying them as if point predictions alone were sufficient. This ignores both epistemic uncertainty (gaps in the model's knowledge) and aleatoric uncertainty (inherent noise in the data), as detailed in the Introduction to Two-Level Uncertainty section. The consequences are severe: overfitting to past regimes, underfitting to new conditions, and unsafe exposure to unpredictable risks. Without uncertainty-aware safeguards, models risk catastrophic performance drops during market transitions.

Two-Level Uncertainty introduces a dual-layer framework to address these risks, as outlined in the Implementing Two-Level Uncertainty in AI Ranking Models section:
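The 20-day RankIC drop described above can be detected with a simple rolling monitor. The sketch below is illustrative, not the AI Stock Forecaster's actual pipeline: the function names (`rank_ic`, `rolling_rank_ic`) and the choice of a trailing-mean window are assumptions for demonstration. RankIC here is the standard Spearman rank correlation between a day's predicted scores and the subsequently realized returns, averaged over a trailing window.

```python
import numpy as np
from scipy.stats import spearmanr


def rank_ic(predicted_scores, realized_returns):
    """Cross-sectional RankIC for one day: Spearman rank correlation
    between the model's scores and the realized returns."""
    ic, _ = spearmanr(predicted_scores, realized_returns)
    return ic


def rolling_rank_ic(daily_preds, daily_returns, window=20):
    """Trailing-mean RankIC over a window of daily cross-sections.

    daily_preds / daily_returns: sequences of per-day arrays, one array
    of (score, return) pairs per trading day. Returns an array where
    entry i is the mean RankIC over days i .. i+window-1.
    """
    ics = np.asarray([rank_ic(p, r) for p, r in zip(daily_preds, daily_returns)])
    # Moving average via convolution; 'valid' keeps only full windows.
    return np.convolve(ics, np.ones(window) / window, mode="valid")


# Illustrative regime shift with synthetic data (hypothetical numbers):
# 30 days where returns follow the model's scores, then 30 days of noise.
rng = np.random.default_rng(0)
preds = [rng.normal(size=50) for _ in range(60)]
rets = [p + 0.5 * rng.normal(size=50) for p in preds[:30]]
rets += [rng.normal(size=50) for _ in range(30)]

roll = rolling_rank_ic(preds, rets, window=20)
# roll[0] covers the stable regime (high RankIC); roll[-1] covers the
# post-shift regime, where the trailing RankIC collapses toward zero.
```

A monitor like this only *observes* the collapse after the fact; the point of the two-level framework introduced next is to pair such diagnostics with per-prediction uncertainty estimates so exposure can be cut before the trailing metric bottoms out.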