In a significant development within the artificial intelligence (AI) sector, Google has announced its intention to sign the European Union’s (EU) AI Code of Practice. This voluntary framework is designed to assist AI developers in aligning with the forthcoming AI Act, the EU’s comprehensive legislation aimed at regulating AI technologies.
The AI Code of Practice, crafted by 13 independent experts, offers guidelines to ensure AI systems are safe, transparent, and respect fundamental human rights. It addresses critical areas such as transparency, copyright protection, and the safety and security of advanced AI systems, including chatbots like OpenAI’s ChatGPT. By signing this code, companies commit to adhering to these principles, thereby facilitating compliance with the AI Act.
Kent Walker, Google’s President of Global Affairs and Chief Legal Officer, expressed the company’s support for the initiative. In a blog post, he stated, “We do so with the hope that this code, as applied, will promote European citizens’ and businesses’ access to secure, first-rate AI tools as they become available.” However, Walker also voiced concerns that certain aspects of the AI Act and the Code of Practice might impede innovation. He highlighted potential issues such as deviations from established copyright laws, prolonged approval processes, and requirements that could expose trade secrets, all of which might hinder Europe’s competitiveness in the global AI landscape.
Google’s decision to sign the code aligns it with other AI companies like OpenAI and the French startup Mistral, both of which have also committed to the framework. Conversely, Meta Platforms has declined to sign, citing legal uncertainties and measures that extend beyond the scope of the AI Act. Joel Kaplan, Meta’s Global Affairs Chief, remarked that the code introduces ambiguities for model developers and could stifle AI innovation in Europe.
The AI Act, recognized as one of the most stringent AI regulatory frameworks globally, classifies AI applications based on risk levels—unacceptable, high, limited, and minimal—and imposes corresponding obligations. High-risk applications, such as biometric identification and critical infrastructure management, are subject to strict compliance requirements. The Act also imposes transparency obligations, including the publication of summaries of the data used to train general-purpose AI models and adherence to EU copyright law.
Despite the voluntary nature of the Code of Practice, companies that sign it are expected to benefit from increased legal certainty and reduced administrative burdens. The European Commission has emphasized that the code aims to foster innovation while ensuring safety and transparency in AI deployment across the EU. Henna Virkkunen, the Commission’s Executive Vice President, stated that the code marks an important step in making advanced AI models available in Europe in a manner that is both innovative and safe.
However, the implementation of the AI Act and the Code of Practice has not been without controversy. CEOs of several major European firms, including Airbus and BNP Paribas, have urged the European Commission to pause the AI Act’s enforcement, arguing that the legislation’s complexity and legal uncertainties could damage the continent’s competitiveness in global AI development.
The United States government has also expressed concerns, suggesting that parts of the EU’s digital agenda, including the AI Act, could create unjustified trade barriers. Despite these criticisms, the European Commission remains steadfast in its commitment to implementing the AI Act as planned, aiming to establish a global benchmark for AI governance.
As the August 2, 2025, deadline approaches for the AI Act’s provisions to take effect, the tech industry continues to grapple with the implications of these regulations. While companies like Google and OpenAI are aligning with the EU’s framework, others like Meta are opting out, highlighting the ongoing debate over the balance between regulation and innovation in the rapidly evolving field of artificial intelligence.