GeForce GPU giant has been data scraping 80 years’ worth of videos every day for AI training to ‘unlock various downstream applications critical to Nvidia’

"Full compliance with the letter and the spirit of copyright law,” says Nvidia.

Leaked documents, including spreadsheets, emails, and chat messages, show that Nvidia has been using millions of videos from YouTube, Netflix, and other sources to train an AI model for use in its Omniverse, autonomous vehicle, and digital avatar platforms.

The astonishing, but perhaps not surprising, scope of the data scraping was reported by 404 Media, which investigated the documents. It discovered that an internal project codenamed Cosmos (the same name as, but separate from, Nvidia’s Cosmos Deep Learning service) had staff using dozens of virtual PCs on Amazon Web Services (AWS) to download so many videos per day that Nvidia accumulated over 30 million URLs in the space of one month.

Copyright laws and usage rights were repeatedly discussed by the employees, who found some creative ways to avoid directly violating them. For example, Nvidia used Google’s cloud service to download the YouTube-8M dataset, as downloading the videos directly isn’t permitted by the terms of service.

In a leaked Slack channel discussion, one person remarked that “we cleared the download with Google/YouTube ahead of time and dangled as a carrot that we were going to do so using Google Cloud. After all, usually, for 8 million videos, they would get lots of ad impressions, revenue they lose out on when downloading for training, so they should get some money out of it.”

404 Media asked Nvidia to comment on the legal and ethical aspects of using copyrighted material for AI training, and the company replied that it was “in full compliance with the letter and the spirit of copyright law.”

Some datasets are permitted for academic use only, and although Nvidia does conduct a considerable amount of research (both internally and with other institutions), the leaked materials clearly show that this data scraping was intended for commercial purposes.

Nvidia isn’t the only firm to be doing this, of course—OpenAI and Runway have both been accused of knowingly using copyrighted and protected material to train their AI models. Interestingly, one source of video content that you’d think Nvidia would have no problem using is gameplay footage from its GeForce Now service—but the leaked documents show that’s not the case.

A senior research scientist at Nvidia explained why to other employees: “We don’t yet have statistics or video files yet, because the infras is not yet set up to capture lots of live game videos & actions. There’re both engineering & regulatory hurdles to hop through.”

AI models have to be trained on billions of data points and there’s no way around this. Some datasets have very clear rules for their use, whereas others have fairly loose restrictions, but when it comes to laws on the use of copyrighted material, it’s very clear what can and can’t be done, even if how those laws apply to AI training isn’t 100% settled.


It’s not just about copyright, either, as video content often contains personal data. While there isn’t a single, overriding federal law in the US that is directly applicable here, there are plenty of regulations concerning collecting and using personal data. In the EU, the General Data Protection Regulation (GDPR) is expressly clear on how such data can be used, even outside of the EU.

One might also wonder what would happen if a company such as Nvidia were found to have breached various regulations whilst training its AI models—if that system is being used across the globe, would it then be blocked in specific countries? Would the likes of Nvidia be willing to make a new model, trained with all permissions granted, just for those locations? Is it even possible to ‘untrain’ a system and start afresh with legally compliant data?

Whatever one feels about AI, it’s clear that there needs to be a more urgent push for transparency, especially when it concerns the use of copyrighted and personal data for commercial purposes. Because if tech companies aren’t held accountable, then data scraping will continue ad hoc.
