The rapid development of artificial intelligence (AI) has generated transformative opportunities alongside significant ethical, societal, and regulatory challenges. In this paper, we analyse these challenges by comparing the approaches and regulatory frameworks of three major actors: the European Union (EU), the United States (US), and China. The analysis shows that each is pursuing a distinct strategy: the EU proposes a stringent, risk-based framework to ensure accountability and transparency; the US, traditionally favouring minimal intervention, is moving towards more structured regulation in response to ethical and security concerns; and China has made AI a core component of its national strategy, aligning its development with state objectives and social stability.