In 2023, governments worldwide are intensifying their scrutiny and regulation of AI technologies in response to the risks these systems pose. Notable developments include the G7 countries' endorsement of international guiding principles for AI, US President Biden's Executive Order on AI safety, the UK's Bletchley Declaration, and the UK Artificial Intelligence (Regulation) Bill. These actions signal that governments recognize the risks of AI, even as the current framework still relies heavily on self-regulation by companies. The risks at issue include bias, data privacy violations, discrimination, disinformation, fraud, deepfakes, job displacement, AI monopolies, and threats to national security.

The legal challenge lies in the unique characteristics of AI systems, above all the "black box" problem: decision-making processes can be opaque even to the systems' own developers. The evolving regulatory framework therefore focuses on responsible use and trustworthy technology, requiring human-centric design, transparency, fairness, and risk mitigation across the entire AI model lifecycle.