This startup wants to train art-generating AI strictly on licensed images

Generative AI, especially text-to-image AI, attracts nearly as many lawsuits as it does venture capital.

Midjourney and Stability AI, the companies behind two popular AI art tools, are entangled in a legal case that claims they violated the rights of millions of artists by training their tools on web-scraped images. Separately, stock image provider Getty Images has taken Stability AI to court for reportedly using images from its site without permission to train Stable Diffusion, an art-generating AI.

The shortcomings of generative AI, chiefly its tendency to regurgitate the data it was trained on and the murky composition of that training data, continue to put it in the legal crosshairs. But a new startup, Bria, claims to minimize risk by training image-generating (and soon video-generating) AI in an “ethical” way.

“Our goal is to empower both developers and creators while ensuring that our platform is legal and ethical,” Yair Adato, Bria’s co-founder, told gotechbusiness.com in an email interview. “We’ve combined the best of visual generative AI technology and responsible AI practices to create a sustainable model that prioritizes these considerations.”

Image Credits: Bria

Adato co-founded Bria in 2020, as the pandemic hit, with the company’s other co-founder, Assa Eldar, joining in 2022. During his Ph.D. in computer science at Ben-Gurion University of the Negev, Adato says, he developed a passion for computer vision and its potential to “improve” communication through generative AI.

“I realized there’s a real business use for this,” Adato said. “The process of creating visuals is complex, manual and often requires specialized skills. Bria was founded to meet this challenge: a visual generative AI platform tailored to enterprises that digitizes and automates this entire process.”

Thanks to recent advancements in AI on both the commercial and research sides (open source models, lower compute costs, etc.), there is no shortage of platforms offering text-to-image AI art tools (Midjourney, DeviantArt, etc.). But Adato argues that Bria is different in that it (1) focuses solely on the enterprise and (2) was built from the ground up with ethical considerations in mind.

Bria’s platform enables businesses to create visuals for social media posts, advertisements and e-commerce listings using its image-generating AI. Through a web app (an API is on its way) and Nvidia’s Picasso cloud AI service, customers can generate, modify or upload visuals and optionally enable a “brand watcher” feature, which attempts to ensure their visuals follow brand guidelines.

The AI in question is trained on “authorized” datasets that include content that Bria licenses from partners, including individual photographers and artists, as well as media companies and stock photo repositories, which receive a portion of the startup’s revenue.

Bria isn’t the only company exploring a revenue-sharing business model for generative AI. Shutterstock recently launched a contributor fund that reimburses creators whose work is used to train AI art models, while OpenAI has licensed a portion of Shutterstock’s library to train DALL-E 2, its image generation tool. Adobe, meanwhile, says it’s developing a compensation model for contributors to Adobe Stock, its stock content library, that will allow them to “monetize their talents” and benefit from any revenue generated by its generative AI technology, Firefly.

But Bria’s approach is more comprehensive, says Adato. The company’s revenue-sharing model rewards data owners based on the impact of their contributions, allowing artists to set their own prices for AI training.

Adato explains: “Each time an image is generated using Bria’s generative platform, we trace the images in the training set that contributed the most to the [generated art], and we use our technology to allocate revenue to the creators. This approach allows us to have multiple licensed resources in our training set, including artists, and avoid copyright infringement issues.”
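
Bria hasn’t published the details of how that tracing works, so the following is only a minimal sketch of the general idea: embed each licensed training image and the generated image in a shared feature space, treat similarity as a rough proxy for contribution, and split the per-generation revenue proportionally among the owners of the closest matches. The names, the embedding source and the similarity heuristic here are assumptions for illustration, not Bria’s actual method.

```python
# Hypothetical sketch of attribution-based revenue sharing (not Bria's real system).
from dataclasses import dataclass

import numpy as np


@dataclass
class LicensedImage:
    owner: str             # creator or repository that licensed the image
    embedding: np.ndarray  # precomputed feature vector (e.g., from an image encoder)


def attribute_revenue(
    generated_embedding: np.ndarray,
    training_images: list[LicensedImage],
    revenue_per_generation: float,
    top_k: int = 5,
) -> dict[str, float]:
    """Split one generation's revenue among owners of the most similar training images."""
    # Cosine similarity between the generated image and each licensed training image.
    sims = np.array([
        float(
            np.dot(generated_embedding, img.embedding)
            / (np.linalg.norm(generated_embedding) * np.linalg.norm(img.embedding))
        )
        for img in training_images
    ])

    # Keep the k closest matches; treat negative similarity as zero contribution.
    top_idx = np.argsort(sims)[::-1][:top_k]
    weights = np.clip(sims[top_idx], 0.0, None)
    if weights.sum() == 0:
        return {}

    # Allocate revenue in proportion to each match's similarity weight.
    payouts: dict[str, float] = {}
    for idx, weight in zip(top_idx, weights):
        owner = training_images[idx].owner
        payouts[owner] = payouts.get(owner, 0.0) + revenue_per_generation * weight / weights.sum()
    return payouts
```

A production system would also have to handle disputed scores, owners who appear multiple times among the top matches, and the per-image prices artists set, which is where the open questions below come in.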

Image Credits: Bria

Bria also clearly watermarks all generated images on its platform and provides free access — or so it claims — to nonprofits and academics who “work to democratize creativity, prevent deepfakes, or promote diversity.”

In the coming months, Bria plans to take this a step further by offering an open source generative AI art model with a built-in attribution mechanism. There have been attempts at this before, such as Have I Been Trained? and Stable Attribution, sites that do their best to identify which artworks contributed to a given AI-generated image. But Bria’s model would let other generative platforms make similar revenue-sharing arrangements with creators, Adato says.

It’s hard to say how much stock to put in Bria’s technology given how young the generative AI industry is. For example, it’s not clear how reliably Bria traces which training images contributed to a generated image, or how it uses that data to distribute revenue. How does Bria resolve complaints from creators who claim they’re being unfairly underpaid? Will bugs in the system cause some creators to be overpaid? Time will tell.

Adato exudes the confidence you’d expect from a founder despite the unknowns, stating that Bria’s platform ensures that each contributor to its AI training datasets gets their fair share based on usage and “real impact.”

“We believe that the most effective way to solve [the challenges around generative AI] is at the training set level, by using a high-performance, enterprise-grade, balanced and secure training set,” Adato said. “When it comes to adopting generative AI, companies need to consider the ethical and legal implications to ensure that the technology is used in a responsible and safe manner, but by working with Bria, companies can rest assured that these concerns are addressed.”

That’s an open question. And it’s not the only one.

What if a creator wants to opt out of Bria’s platform? Can they? Adato assures me they will be able to. But Bria uses its own opt-out mechanism as opposed to a common standard like DeviantArt’s, or that of artist advocacy group Spawning, which offers a website where artists can remove their art from one of the more popular datasets used to train generative art models.

That adds to the burden on content creators, who may now have to worry about taking steps to remove their art from yet another generative AI platform (unless, of course, they use a “cloaking” tool like Glaze, which makes their art effectively unusable for AI training). Adato doesn’t see it that way.

“We have made it a priority to focus on secure and high-quality corporate data collections in the construction of our training sets to avoid biased or toxic data and copyright infringement,” he said. “Overall, our commitment to ethical and responsible training of AI models sets us apart from our competitors.”

Those competitors include incumbents such as OpenAI, Midjourney and Stability AI, as well as Jasper, whose generative art tool, Jasper Art, also targets enterprise clients. However, the formidable competition and the outstanding ethical questions don’t seem to have deterred investors: Bria has so far raised $10 million in venture capital from Entrée Capital, IN Venture, Getty Images and a group of Israeli angel investors.

Image Credits: Bria

Adato said Bria currently serves “a range” of clients, including marketing agencies, visual stock repositories, and technology and marketing companies. “We are committed to further expanding our customer base and providing them with innovative solutions for their visual communication needs,” he added.

If Bria succeeds, I can’t help but wonder whether it will spawn a new crop of generative AI companies with a narrower scope than today’s big players, and thus less susceptibility to legal challenges. As funding for generative AI starts to cool off, in part because of the high level of competition and questions surrounding liability, more “narrow” generative AI startups might stand a chance of cutting through the noise and avoiding lawsuits in the process.

We’ll have to wait and see.