
White House sets out commitments made by tech companies to manage risks posed by AI

Published on Tuesday 25 July 2023


As the music industry continues to set out its priorities regarding the regulation of artificial intelligence, last week the US government announced it had negotiated an agreement with seven leading AI companies. In that agreement, the tech businesses voluntarily commit to adhere to a set of practices designed to allay some of the concerns that have been raised about the rapid evolution of AI.

Of course, while the music industry – and other copyright industries – have a specific set of concerns about the training and use of generative AI, there are more general concerns for society at large about how AI is developed and employed.

In the main, it’s those more general concerns that are addressed in the agreement announced by US President Joe Biden last week, although there are some commitments around transparency, which is an area where the music industry wants more regulation.

Announcing the commitments being made by Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, the Biden administration said it was encouraging the companies that are developing AI technologies “to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety”.

The US government’s statement continued: “These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security and trust – and mark a critical step toward developing responsible AI”.

Under ‘safety’ there are commitments regarding the testing of new AI systems and the sharing of information about managing AI risk, while under ‘security’ the companies commit to invest in “cybersecurity and insider threat safeguards” and to facilitate “third-party discovery and reporting of vulnerabilities in their AI systems”.

It’s under ‘trust’ that the transparency commitments appear. “The companies commit”, we are told, “to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception”.

You can read the full set of commitments here.

For the music industry, the key demands around AI regulation relate to copyright and transparency. First, the music community is seeking confirmation that whenever AI companies train their models with existing copyright-protected works, they must first seek permission from and negotiate a deal with whoever owns the copyright in that content.

When it comes to transparency, as well as wanting it to be clear when content is AI-generated – which is covered by last week’s commitments – the music industry also wants AI companies to be transparent about what data they have used to train their models, so that it is obvious whether any copyright-protected works have been exploited.

These demands have been set out in various places in recent months, including via the Human Artistry Campaign and UK Music’s AI white paper. Plus just last week via a statement in relation to the incoming European Union AI Act and in the form of seven principles for regulating AI set out by songwriters, performers and their collecting societies.
