AI Safety

Hallucination

When an AI model produces confident but incorrect or fabricated information. Hallucinations highlight the need for better data validation, model tuning, and safeguards to maintain trust and reliability.
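One common safeguard against hallucination is to validate a model's output against a trusted reference before accepting it. The sketch below is a minimal, hypothetical illustration of that idea: the catalogue of known sources and the function name are invented for this example, not part of any real library.

```python
# Hypothetical validation step: compare a source cited by a model against a
# small catalogue of known-good references. In practice this catalogue would
# be a real database or retrieval index; here it is a hard-coded set.
KNOWN_SOURCES = {
    "arxiv.org/abs/1706.03762",
    "doi.org/10.1038/nature14539",
}

def possible_hallucination(cited_source: str) -> bool:
    """Return True when the cited source is not in the catalogue,
    meaning the claim should be flagged for human review."""
    return cited_source not in KNOWN_SOURCES

# A citation the catalogue recognizes passes the check.
print(possible_hallucination("arxiv.org/abs/1706.03762"))  # False
# An unrecognized (possibly fabricated) citation is flagged.
print(possible_hallucination("arxiv.org/abs/9999.99999"))  # True
```

A check like this does not prove an answer is correct, but it catches the fabricated-reference failure mode cheaply, which is why grounding outputs in verifiable data is a recurring theme in hallucination mitigation.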

Guardrails

Safety mechanisms and ethical constraints that guide AI systems to operate responsibly. They prevent harmful or biased outputs and ensure transparency, accountability, and alignment with human values.
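In practice, one simple form of guardrail is an output filter that screens a model's response before it reaches the user. The sketch below is a minimal illustration under assumed rules: the blocked patterns and function name are invented for this example and stand in for the far richer policy checks real systems use.

```python
import re

# Hypothetical guardrail: block responses that match disallowed patterns.
# Real guardrails combine many such checks (toxicity classifiers, PII
# detectors, policy models); these two regexes are only illustrative.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # resembles a US SSN
    re.compile(r"(?i)\bignore all previous\b"),  # crude injection marker
]

def apply_guardrail(response: str) -> str:
    """Pass the response through unchanged if it clears every check,
    otherwise replace it with a refusal message."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(response):
            return "[Response withheld by safety guardrail]"
    return response

print(apply_guardrail("The capital of France is Paris."))
print(apply_guardrail("My SSN is 123-45-6789."))
```

Filtering is only one layer; production systems also constrain inputs, training data, and model behavior itself, so that harmful outputs are unlikely to be generated in the first place.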

Emergent Behavior

Unexpected or unprogrammed actions that arise as AI systems grow more complex. These behaviors can lead to surprising creativity or unpredictable outcomes, highlighting the importance of AI safety and alignment.