These risks relate to how the AI model was built, what data it was trained on, and the biases that may be encoded in its design.
- Proxies for sensitive attributes - AI may use variables like zip code or school history as stand-ins (proxies) for race, income, or gender, often unintentionally reinforcing bias (see the proxy-audit sketch after this list).
- Historical bias in training data - Past decisions (e.g., hiring or discipline outcomes) may reflect biased practices that the model learns to replicate.
- Incomplete context - AI often lacks the nuance of human judgment and situational awareness. Data might not include recent changes, human intent, or localized factors.
- Data drift or concept drift - The model is trained on historical data, but real-world patterns change: the input distribution shifts (data drift) or the input-outcome relationship shifts (concept drift), eroding accuracy over time (see the drift check after this list).
- Over-reliance on correlation - The model surfaces patterns without causal understanding, which can be misleading in high-stakes decisions.
- Poor sampling or representation - Certain groups or contexts are underrepresented in the training data, leading to biased outputs or blind spots.
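A minimal proxy-audit sketch for the first item above: if a simple model can predict the sensitive attribute from the remaining features, those features are acting as proxies for it. The file and column names (`applicants.csv`, `race`, `outcome`) are hypothetical placeholders; the scikit-learn calls are real.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applicants.csv")        # hypothetical dataset
sensitive = df["race"]                    # assumed binary here for roc_auc
features = pd.get_dummies(df.drop(columns=["race", "outcome"]))

# AUC near 0.5 means little leakage; well above 0.5 means the
# remaining features encode the sensitive attribute via proxies.
auc = cross_val_score(LogisticRegression(max_iter=1000),
                      features, sensitive, cv=5, scoring="roc_auc").mean()
print(f"sensitive-attribute leakage AUC: {auc:.2f}")
```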
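The drift item also lends itself to a simple check: compare each monitored feature's training distribution against recent production data with a two-sample Kolmogorov-Smirnov test. The file and column names here are hypothetical; `scipy.stats.ks_2samp` is a real function.

```python
import pandas as pd
from scipy.stats import ks_2samp

train = pd.read_csv("training_data.csv")     # hypothetical files
recent = pd.read_csv("last_30_days.csv")

for col in ["income", "age", "credit_score"]:   # numeric features to watch
    stat, p = ks_2samp(train[col], recent[col])
    flag = "possible drift" if p < 0.01 else "ok"
    print(f"{col:<13} KS={stat:.3f} p={p:.4f} {flag}")
```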
These risks concern the accuracy, reliability, and interpretation of the AI's outputs and predictions.
- Probabilistic vs. deterministic thinking - AI outputs are often presented with confidence, but they represent probabilities — not certainties. Users may over-trust high scores or numbers.
- Overfitting or narrow logic - The model might perform well in training but struggle with real-world complexity, especially when new conditions arise (see the overfitting check after this list).
- Missing uncertainty indicators - AI rarely shows what it doesn't know; the absence of confidence intervals, stated assumptions, or alternative outcomes is a major risk (see the bootstrap interval sketch after this list).
- "Statistical but not practical" findings - Look for actionable insight, not just mathematical confidence.
- Illusion of precision - Outputs are reported to several decimal places, which gives a false sense of certainty.
- Inconsistent performance across groups - The model works well for some subpopulations but not others, and the gap stays invisible unless it is specifically tested for (see the per-group check after this list).
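A quick way to surface the overfitting risk is to compare training accuracy against held-out accuracy; a large gap suggests the model memorized rather than generalized. This sketch uses real scikit-learn APIs on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)   # unconstrained tree overfits
model.fit(X_tr, y_tr)
print(f"train accuracy: {model.score(X_tr, y_tr):.2f}")   # typically ~1.00
print(f"test accuracy:  {model.score(X_te, y_te):.2f}")   # noticeably lower
```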
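For the missing-uncertainty item, one lightweight remedy is to report a bootstrap interval around a metric instead of a bare point estimate. A minimal sketch on toy labels:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # toy data
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

rng = np.random.default_rng(0)
n = len(y_true)
samples = []
for _ in range(2000):
    idx = rng.integers(0, n, n)        # resample cases with replacement
    samples.append((y_true[idx] == y_pred[idx]).mean())

lo, hi = np.percentile(samples, [2.5, 97.5])
point = (y_true == y_pred).mean()
print(f"accuracy {point:.2f}, 95% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```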
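And for inconsistent performance across groups, the fix is to compute the same metric per subpopulation rather than once overall. In this toy sketch (hypothetical data), the single overall number conceals that the model is perfect for group A and barely better than chance for group B:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

results = pd.DataFrame({
    "group":  ["A", "A", "B", "B", "B", "A", "B", "A"],
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 0, 0, 1, 1, 0],
})

print(f"overall: {accuracy_score(results['y_true'], results['y_pred']):.2f}")
for name, g in results.groupby("group"):
    acc = accuracy_score(g["y_true"], g["y_pred"])
    print(f"group {name}: {acc:.2f} (n={len(g)})")   # A: 1.00, B: 0.25
```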
These concerns relate to understanding how the AI makes decisions, whether its logic can be explained, and the ethical implications of its use.
- Hallucinations (in GenAI) - Generative models can invent facts, sources, or reasoning, and do so fluently, which makes the fabrications easy to believe.
- Opaque logic ("black box" models) - Users may not understand how an answer was generated, making it difficult to verify or explain the result (see the permutation-importance sketch after this list).
- Explainability gaps - Even when a model's outputs are available, the reasons or factors behind its predictions may not be accessible or intelligible to end users.
- No accountability chain - It's unclear who owns or oversees the decisions made with AI, and there is no escalation or appeal process.
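One model-agnostic way to narrow the black-box and explainability gaps is permutation importance: shuffle one input at a time and measure how much the held-out score degrades. A minimal sketch using real scikit-learn APIs on synthetic data; note it attributes influence to features but still does not explain individual decisions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature 10 times and record the average drop in score.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean score drop {imp:+.3f}")
```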
These risks focus on how AI decisions affect people, particularly in high-stakes contexts, and whether appropriate human oversight is in place.
- Unintended consequences - A small prediction error in a high-stakes context (discipline, credit, hiring) can have major downstream effects on people.
- Fairness and equity tradeoffs - AI may optimize for accuracy or efficiency at the cost of inclusion, justice, or proportionality.
- Automation without oversight - Decisions made too quickly or too fully by AI — without human review — can lead to unethical or irreversible outcomes.
- Feedback loop effects - Prior AI decisions influence future data, reinforcing patterns even if they're flawed (e.g., predictive policing, content recommendations); the toy simulation below shows how a small early skew can persist.
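The feedback-loop effect can be demonstrated with a deliberately simplified toy simulation (an illustrative assumption, not a model of any real system): two areas have identical true incident rates, patrols are allocated in proportion to recorded incidents, and incidents are only recorded where patrols go, so a single extra early record tends not to wash out.

```python
import random

random.seed(1)
true_rate = {"north": 0.10, "south": 0.10}   # identical underlying rates
recorded = {"north": 2, "south": 1}          # one extra early chance record

for day in range(365):
    total = sum(recorded.values())
    for area in recorded:
        # allocate 10 daily patrols in proportion to recorded incidents
        patrols = round(10 * recorded[area] / total)
        # incidents are only recorded where patrols are present
        recorded[area] += sum(random.random() < true_rate[area]
                              for _ in range(patrols))

print(recorded)   # the early imbalance persists or grows, despite equal rates
```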